Sunday, June 19, 2011

Does not compute ... E pur si muove

"As a kid, I used to worry that at bedtime I'd accidentally start thinking about thinking, and would be unable to fall asleep. That bottomless iteration was so beguiling it would sometimes take me an hour to let it go." Libbey White

After asking a newly minted robot to declare "Hello world!", my thoughts turned to robotic mental instability, chaos, and self-destruction. Classically this is done by asking the robot a paradoxical question to which it responds "Does not compute" repeatedly before its tiny 8086 brain overheats, causing ignition of magnetic tape reels followed by high voltage electrical discharges. This procedure can be performed on any old computer. Some are resistant, but none immune.

In addition to paradoxes, questions about their emotional state, existential pondering and simple bad logic (NOT-logic?) are also popular routes to robotic insanity. Some hardware is vulnerable to a killer poke, and specialty CPUs include a convenient Halt and Catch Fire machine code instruction.

Tight recursive loops are simple to implement and widely admired for their destructive power. Their logical robustness adds to their deep appeal. A function that calls itself (a recursion) can lead to good or evil:

    #include <stdio.h>
    #include <stdbool.h>

    // Calls itself forever; each call adds a stack frame that is never released.
    bool foobar() {
        bool never;

        printf( "Echo...\n" );
        never = foobar();    // no base case -- the stack grows until it overflows

        return never;        // unreachable
    }

The power of this construction lies not in its endless echoing, but in the need for the computer to keep track of which instance of foobar() is currently executing -- a record (call stack) of the instances to which it will never return. The computer will require some serious resources (stack space) to accomplish its goal; universal domination or self-destruction are the only feasible outcomes.
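That bookkeeping can be made visible. Here is a small illustrative sketch (mine, not part of the robot code) that prints the address of a local variable in each call; on a typical machine with a downward-growing stack the addresses march steadily lower, one frame for every pending instance. The depth is capped so it demonstrates the effect instead of reproducing foobar()'s fate:

#include <stdio.h>

// Each recursive call gets its own stack frame; printing the address of a
// local variable shows the frames piling up (typically at lower addresses).
void echo( int depth ) {
    int marker;  // lives in this call's stack frame

    printf( "depth %d, frame at %p\n", depth, (void*)&marker );

    if ( depth < 5 )        // capped so the demo terminates;
        echo( depth + 1 );  // remove the cap to share foobar()'s fate
}

int main( void ) {
    echo( 0 );
    return 0;
}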

While computers can talk back to us, robots have a body. Their sensory organs interact with the programs they run, the actions they themselves produce, and the environment they see or feel. Their output is fed into their own input, and the feedback can't always be controlled. Generally a robot's response in any environment is non-linear, and is often chaotic and unpredictable. So robots are naturally appealing for their inherent ability to fail in spectacular ways. A simple programming loop is all that is needed, with the physical I/O providing the recursion. Memory is not an issue, unlike software recursion, where every available particle in the universe might be commandeered for memory.

It is useful to diagram a robot's body and brain, to show the relationship between the brain, body and environment. My plan for a simple functional robot that embodies the potential for self-annihilation and/or apocalypse starts with this system:
The robot brain (program/memory/CPU) is contained in the box. The system is unstable when the average input is near the threshold value: no matter what the current state of the light and photosensor, the state will change after the brain transforms the input to the output.
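One way to write the update rule (my notation, not part of the original diagram): let $a$ be the ambient light reaching the sensor, $c$ the extra light coupled in from the LED, and $\theta$ the threshold. Then

$$ s_t = a + c\,L_t, \qquad L_{t+1} = \begin{cases} 1, & s_t < \theta \\ 0, & s_t \ge \theta. \end{cases} $$

If $a < \theta \le a + c$ -- ambient light just below threshold, ambient plus LED light above it -- an OFF light reads dark and switches ON, and an ON light reads bright and switches OFF, so the state flips on every pass through the loop.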

I implemented this robot on the Android Open Accessory Development Kit (ADK). The computer (Arduino microcontroller) has a body (the physical input/output), with light sensing and emitting devices mounted on the ADK shield:

ADK shield: Arrows point to the photoresistor (light dependent resistor, LDR) at left, and the light (light emitting diode, LED) at right

The photoresistor and LED would be cheap and easy to implement without this shield, but they were already conveniently wired and physically close to each other. But, unlike the diagram, they are not facing each other: both face away from the board so little of the LED's light normally reaches the photoresistor.

This deficiency is easily remedied by holding a reflective or light scattering surface above them. This allows for physical control of the photonic coupling between the devices. Now it is a robot that interacts in a non-trivial way with its environment:


The robot brain is implemented as a main loop function, which is repeatedly executed as fast as the microprocessor can go. Its pseudo-code looks something like this:

// Initialize a global threshold based on 
// average ambient lighting conditions.
threshold = 500;  // a value below light sensor output in ambient 
                  // conditions, where the sensor range is about [0,880]
loop(){

  // Get current state of the light sensor.
  nVin = ReadInput( PHOTORESISTOR );

  // Set new state of the light.
  if ( nVin >= threshold )  WriteOutput( LED, 0 );  // turn LED off
  else                      WriteOutput( LED, 1 );  // turn LED on
}

Also see full C code for an Arduino below.


The system has a constant-OFF regime (in bright light), a constant-ON regime (in dim light), near-periodic regimes, and fully non-periodic regimes (both in dim light with a nearby scattering/reflecting surface). The boundary between the periodic and non-periodic regimes lies at the edge of chaos. In this unstable regime the average flash rate and apparent brightness are controlled by the relative contribution of scattered light.
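These regimes can be reproduced in a rough desktop simulation -- a sketch of mine with made-up numbers, not the Arduino code below -- if the photoresistor is given a sluggish, smoothed response instead of an instantaneous one. Near the threshold the delayed feedback forces oscillation; lower the coupling or move the ambient level away from threshold and the output settles to a constant state:

#include <stdio.h>

// Toy model of the light-feedback loop: the "sensor" relaxes slowly toward
// ambient light plus the coupled LED contribution, and the LED switches on a
// fixed threshold. All values and the lag constant are made up for illustration.
int main( void ) {
    double ambient   = 480.0;   // just below the threshold
    double coupling  = 60.0;    // extra light reaching the sensor when LED is on
    double threshold = 500.0;
    double lag       = 0.2;     // fraction of the gap closed per loop pass
    double sensor    = ambient;
    int    led       = 0;

    for ( int t = 0; t < 40; ++t ) {
        double target = ambient + coupling * led;
        sensor += lag * ( target - sensor );      // sluggish photoresistor
        led = ( sensor < threshold ) ? 1 : 0;     // same rule as the robot brain
        printf( "%3d  sensor %6.1f  LED %d\n", t, sensor, led );
    }
    return 0;
}

With coupling reduced to 10 the target never crosses the threshold and the LED stays on; with ambient raised well above 500 it stays off.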

Runaway oscillation is successfully achieved, although it appears that hardware limitations prevent immediate self-destruction. Step-wise execution of the loop is controlled by the CPU clock, which limits the oscillation frequency to a value considerably less than infinite. What is the maximum oscillation frequency? It is some small fraction of the processor clock rate (16 MHz for the Arduino Mega 2560), as many cycles are required for each loop execution.
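To put a number on it, loop passes can simply be counted. This is a measurement sketch of mine (standard Arduino calls only: millis(), analogRead(), digitalWrite(), Serial.println()) that does the same work per pass as the robot brain. On an ATmega-based board the analog-to-digital conversion alone takes roughly 100 microseconds at default settings, so expect on the order of thousands of passes per second rather than millions:

// Measure how many feedback-loop passes run per second.
// Same work per pass as the robot brain: one analogRead, one digitalWrite.

#define PHOTORESISTOR  A2
#define LED_2_BLUE      6

unsigned long passes = 0;
unsigned long windowStart = 0;

void setup() {
  pinMode( LED_2_BLUE, OUTPUT );
  Serial.begin( 9600 );
  windowStart = millis();
}

void loop() {
  int nVin = analogRead( PHOTORESISTOR );
  digitalWrite( LED_2_BLUE, ( nVin > 500 ) ? HIGH : LOW );  // same threshold as the main sketch
  ++passes;

  if ( millis() - windowStart >= 1000 ) {  // once per second
    Serial.println( passes );              // loop passes in the last second
    passes = 0;
    windowStart = millis();
  }
}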

Alas, the robot doesn't cause any damage to itself or the lab. Replacing the LEDs with high power lasers might improve performance. 


The mathematics of systems like this (non-linear dynamical systems) is poorly understood, and such systems resist solution in terms of continuous or periodic functions. If the sensor elements are strictly linear (for example, if the photoresistor response is directly and exactly proportional to the incident light level), and the output has exactly two states (0/1, or ON/OFF), then the system can in principle have only steady, or strictly periodic, states. But photoresistors, LEDs, and CPUs are not linear, and have states that are only approximately binary. While the system is deterministic, its behavior is not predictable.

Adjusting the amount of light reflected back to the photoresistor so that the system sits at the boundary between roughly periodic and constant states yields a strongly chaotic, nonlinear dynamical system. Such systems are important because of their sensitivity to precise conditions -- here, the environmental lighting and nearby light-scattering surfaces. This makes them valuable to robots for sensing and information processing, a core function of any intelligent machine.

We tend to regard technology as predictable, and computers are designed to be as predictable as is feasible. But robots have sensory elements and environmental conditions that are not linear. That chaos is dominant in such simple systems, with only a simple transition rule and a pair of coupled elements, is surprising to robots and people. Does not compute ... E pur si muove!



Appendix:

C code for the Android Open Accessory Development Kit (ADK), or any Arduino with an LED and photoresistor on the appropriate pins:

//
// feedback_blinker_1
//
// Light feedback system on Android Open Accessory Development Kit (ADK).
// Demonstrates a simple dynamical system with non-trivial behavior,
// including a non-periodic regime.
//
// created by Mark Dow, June 17, 2011
//

#define  LED_2_RED        5
#define  LED_2_BLUE       6
#define  LED_2_GREEN      7
#define  PHOTORESISTOR   A2

// Initialize a threshold based on average ambient lighting conditions.
int nThreshold = 500;  // a value below light sensor output in ambient 
                       // conditions, where the sensor range is about [0,880]

int nVin = 0;
int nLoopCount = 0;

void setup(){
  pinMode( LED_2_BLUE, OUTPUT );
  pinMode( PHOTORESISTOR, INPUT );
  Serial.begin( 9600 );
}

void loop(){

  nVin = analogRead( PHOTORESISTOR );

  if ( nVin > nThreshold ) {
    digitalWrite( LED_2_BLUE, HIGH );  // off
  }
  else {
    digitalWrite( LED_2_BLUE, LOW );     // on
  } 
  
//  delay(100);  // slow the loop execution frequency

//  // Every 1000 times through the loop, write stats to the serial monitor.
//  // Note: This writing to the serial port takes about 10 ms, slowing the
//  // loop frequency.
//  ++nLoopCount;
//  if ( nLoopCount == 1000 ) {
//    Serial.print( nVin );        // column 1: current input value
//    Serial.print( "\t" );
//    Serial.println( millis() );  // column 2: time since start, in milliseconds
//    nLoopCount = 0;
//  } 
}

Programmers Anonymous notes, 1001

Awesome protogeek photo.


Recursion Weekly, with recursive QR code.


Nice in-depth article about the development of the SX-70, Edwin Land's vision for instant photography. It compares how Land and Steve Jobs viewed the development and marketing of technology:

Polaroid’s SX-70: The Art and Science of the Nearly Impossible. A man, a company, and the most wildly ambitious consumer-electronics device of its era.


Quad core tablets for gaming will also be useful for image processing.

Nvidia touts quad-core Kal-El chip in Android tablet

 
Bill Griggs post about Minibloq, describing the graphical programming interface for Arduino.


"They're Made out of Meat", a thought provoking and hilarious movie short. Sean Michael Regan comments:
Bre Pettis showed me this video, which dates from at least four years ago and is derived from a short story by Terry Bisson, at Bay Area Maker Faire 2011 last weekend, and I can’t stop giggling over it. The video shows more than a smidge of Twin Peaks styling, plus a nod to aliens Kang and Kodos from The Simpsons (“We are merely exchanging long protein strings…”) at the end.

The good bad and ugly sides of computer science as a passion and college major; commentary from the NY Times' "Computer Science's 'Sputnik Moment'?":
Cyberspace is the anytime, anywhere laboratory where you can design and run your own experiments by writing just a little software. It’s affordable by anyone with access to the Internet. And each piece of software is an individual’s expression of creativity, much like poetry or music. Computer science can be fun and empowering.
...
Computational thinking facilitates the invention and manipulation of abstractions (e.g., algorithms), which is central to managing complexity and building scalable systems. Students know that by majoring in computer science, they will have an edge over those without those skills. They will be in high demand by all sectors, not just by information technology companies. (Jeannette M. Wing, head of the computer science department at Carnegie Mellon University)
Katy Snyder again: "You should not use introductory computer science courses as filters (to wash out students) but as launching pads." For sure, serious intellectual demands and "concentrating on the practical applications of computer science" can help a lot. But from the starting point of a practical application course focus, I would like to hear about some creative teaching practices aimed at helping undergraduates find the courage to dig deep. Hard work and play can co-exist. (John M. Staudenmaier, S.J., was editor in chief of Technology and Culture from 1995 to 2010. He is assistant to the president for mission and identity at the University of Detroit Mercy)
I have firsthand experience in the individualistic, “let’s see who can code the shortest, most obscure-program to do x” cowboy culture of programming. That culture is unappealing to many women -- as well as many men. I can only wonder how much of the repeated, massive failure of large software development systems is related to such bad practices and the weeding out of the very kind of people who could help counterbalance them. (Zeynep Tufekci, assistant professor of sociology at the University of Maryland, Baltimore County)

I don't see a downside to unironic Bronydom. From Wired: 
"My Little Pony Corrals Unlikely Fanboys Known as ‘Bronies’"

Rhinestone meme toaster.

Saturday, June 4, 2011

Programmers Anonymous notes, 1000

 "While other visitors gazed at the workings of the beautiful instrument with the sort of expression, and I dare say the sort of feeling, that some savages are said to have shown on first seeing a looking-glass or hearing a gun - if indeed, they had a strong an idea of its marvelousness - Miss Byron, young as she was, understood its working, and saw the great beauty of the invention.Augustus De Morgan commenting on Ada Lovelace (then Miss Byron) on the occasion of Ada's first introduction to Charles Babbage's difference engine, on June 5, 1833. (from Lovelace met Babbage on this day 1833)


Tiny blinking lights:

We wrote, installed and tested a program that controls an embedded processor (an Arduino Mega 2560), a general purpose computer on a chip. The program caused the device to signal "Hello world!" in a way I understand: tiny blinking lights. A brilliant twinkling, because if you can blink some lights any way you want, just about anything can be controlled.

Far more sophisticated computers are now common -- for example, any cell phone. But this one is designed to be inexpensive, easy to program, open source with an open hardware design, and has flexible I/O with external digital or analog electronic devices. This particular microcontroller is also hooked up to a physical interface (hardware parts) and firmware designed for serial communication with Android-controlled devices. It is suitable for use as the executive control system of a robot brain. The whole package is a beta version of the Android Open Accessory Development Kit (ADK) (also see Why Google Choosing Arduino Matters and Is This the End of “Made for iPod” (TM)?).

An ADK Arduino device might be called an "Android accessory", because it can be controlled by an Android device (like a smart-phone running an Android operating system). Here is a short video of the DemoKit Android application showing bi-directional communication between a handheld Android device and an ADK device. 

But because the ADK device is configured as a USB host controller, it is the Arduino that controls the Android device. So an Android device is actually an accessory to the ADK device.

This example doesn't communicate with Android at all, running alone on the ADK Arduino device.

Here's the ADK device's top side (left), the device with a red and blue light blinking, and the device swept across the camera frame to show that the lights are on for brief periods and with changing relative color (each circle is illuminated by two LEDs, red and blue):

Blinking lights are a central element of physics (a photon is the smallest and briefest blinking light) and a wide range of technologies (for example telecommunication and video monitors). They happen to be close to my heart and work. I've spent a lot of time in the last decade blinking lights in front of people and seeing how their brains blink back. Conceptually everything from motors, to packets, to robots, to the nervous system of animals is composed of, among other things, tiny blinking lights.

Anyone is welcome to borrow this kit for tinkering. The Arduino part is cheap, available, and all that is required, except for extra LEDs, motors, etc. If you make a robot for me I'll buy you an extra microcontroller and board in exchange.

Did I mention that the hardware and firmware are open source? While this kit, designed and commissioned by Google, is expensive and not currently available, anyone is free to make compatible devices. And they already have. For example, Seeeduino ADK Provides Inexpensive Android Open Accessory Kit Alternative for $80.


Collatz conjecture proven?:

A while back (see Collatz conjecture and binary palindromes) we looked at patterns in the 3n + 1 problem. Paul Erdős famously said of the Collatz conjecture that "Mathematics is not yet ready for such problems". Now Gerhard Opfer, a (former) student of Lothar Collatz, has submitted a paper purporting to prove the conjecture. I don't understand the methods or the proof, and it hasn't been peer reviewed yet. Here's a brief news item about the result, "Deceptive puzzle may be solved after 74 years".



Sissy's Magical Ponycorn Adventure


Spherical cat in space (or Lady's lunar emulation):



Tuesday, May 10, 2011

Programmer Anonymous notes, 111

"Marry Skype's software with the Xbox Kinect and an HD television set, and Microsoft can make a powerful argument for getting into millions of living rooms." Tim Webber commenting on Microsoft's acquisition of Skype.


A lattice on a tesseract with dimensions of 4x4x4x4 has 256 points. If each point is associated with a data value (1 bit), 256 bits are accommodated. A tesseract has 24 square faces, which pair up into 12 parallel pairs. If a parity bit is assigned to every 4-tuple in the tesseract (12 cubes of parity bits, one 4x4x4 cube for each pair of faces), there will be 12*4^3 = 768 parity bits. So 1024 bits can encode 256 data bits with the ability to detect and correct many transmission errors. How many? This is not as efficient as some Hamming codes, but its structure is beautiful.
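Writing out the counting under that pairing scheme:

$$ 4^4 = 256 \ \text{data bits}, \qquad 12 \times 4^3 = 768 \ \text{parity bits}, \qquad 256 + 768 = 1024 \ \text{bits in all}, $$

a rate of 256/1024 = 1/4. For comparison, a single-error-correcting Hamming code with $m$ check bits protects up to $2^m - m - 1$ data bits, so 9 check bits already cover 256 data bits; the tesseract scheme spends far more redundancy, presumably buying the ability to detect and correct more than one error.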

Saturday, May 7, 2011

Programmer Anonymous notes, 110

ilearnedtoprogram.com - Share the Story of Why and How You Learned to Program

...
"in college, when I realized I could have a big impact on the world by building software with a focus on humanity." - Vanessa Hurst
"because I believed (and still do) that it's one of the best skills to have in order to change people's lives for the better." Alyssa Daw
"when I wanted a piano and got an Atari -- programmed the keys to make sounds." Katie A. Siek
"because I wanted to make interactive electronic literature." - Zuzana Husarova
"because I wanted to build cool things. Now that's my job." - Alex Gaynor
...

Multiplayer High, by Douglas Thomas and John Seely Brown

Listen to anyone talk about schools today: classical education just can't keep up. In the digital generation's world of constant change, most schooling is profoundly boring. But what else is possible?

About this essay,  jdparadise comments:
So, in a word, give or take: Montessori.

Shannon's Law, by Cory Doctorow
A recent short story for an anthology. Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electronic engineer, and cryptographer known as "the father of information theory".[1]

The title is a reference to the Shannon–Hartley theorem:
...a bound on the maximum amount of error-free digital data (that is, information) that can be transmitted with a specified bandwidth in the presence of the noise interference ...
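For reference, the bound itself is the familiar capacity formula,

$$ C = B \log_2\!\left(1 + \frac{S}{N}\right), $$

with $C$ the channel capacity in bits per second, $B$ the bandwidth in hertz, and $S/N$ the signal-to-noise power ratio.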
The story includes a short cryptogram, and a shorter cryptic numerical puzzle (not the same thing). I have reason to believe (an aesthetic reason) that neither would be hard to crack.

Doctorow has also written about metacrap.


Alan Turing Statue Passes Dog's Turing Test:
Border collie is baffled by stubborn old man who refuses to play. Said old man is actually a statue of Alan Turing, which has seemingly passed the dog's own meager Turing test.



Robots learning to share


Robot Evolution: EPFL and the University of Lausanne team up to explore Hamilton's law of kin selection. Published in PLoS on May 3rd, 2011, biology and robotics come together to help understand how altruistic genes get passed on from one generation to the next. To learn more about the labs: http://lis.epfl.ch and http://www.unil.ch/dee/page6763.html

Once the team was comfortable with the virtual evolution environment it had set up, it added a new twist: It allowed the robots to share food disks with each other. If Hamilton's hypothesis was correct, "successful" virtual robots were likely to be those that were closely related and shared food with each other; that would help to ensure that at least one of them -- and some of the genes of both--would make it to the next round. (Two robots with a modest amount of food disks would both be more likely to be cut from the simulation, but if one robot gave all of its food to a second robot, that second robot would likely make the next round.) And indeed, altruism quickly evolved in the simulation, with greater food-sharing in groups where robots were more related, the researchers report online today in PLoS Biology. The more closely related the robots, the quicker they cooperated. "It shows how general the [theory] is, whether you are an insect, a human or a robot," says Floreano.

Sunday, April 10, 2011

Programmers Anonymous notes, 101

On the obliqueness of scientific inquiry:
"Science cannot solve the ultimate mystery of nature. And that is because, in the last analysis, we ourselves are part of nature and therefore part of the mystery that we are trying to solve" Max Planck



veggieman's no sdfjklsdiocjioji comics: 
besht shtory evar
this is the cool 
ohm ygo d
Griffin's c=c+1:
Math Limericks Revisited
     - Klein bottle
     - Seven Bridges of Königsberg



Join me April 9th at UCLA for Obscura Day! I'll be hosting an event with Brad Fidler and Leonard Kleinrock in the room where the first Internet message was sent in 1969!

Come be one of the very first to rediscover the room where the Internet was born. Almost forgotten in history and used for years as an unremarkable classroom at UCLA, it will reopen as a museum this July. Get there first and stand in the very spot that the first modem sent the first message ever, and see photos and documents from those first days of the Internet that have been lost to obscurity for decades.



The course should be of interest to anyone seeking to develop robust robot software, and anyone who is interested in real-world applications of statistical theory. Students participating in this course will acquire the skill of developing robust software for robots operating in real-world environments, and an understanding of the mathematical underpinnings of their software. Even though this course focuses on mobile robotics, the techniques covered in this course apply to a much broader range of embedded computer systems equipped with sensors and actuators.
Jacob is irked by the poor website design, in particular the dynamic menu that must be chased like one is playing whack-a-mole. This is only a problem if the font size is increased from the default size, which old people like me tend to do. I would suggest he contact the professor directly, and politely, but his FAQ notes that:
I sent you information by Email - Didn't you read my mail? This has become a problem. At present, I receive between 150 and 400 personal Email messages per day. Even if I quit my job and stop sleeping, I wouldn't be able to answer all of them. I am willing to spend up to two hours a day on Email. This means that I am not even able to read the majority of my Email. I realize yours might be one of those that I am unable to read, and I hope you accept my sincere apologies. On the flip side, if I only did Email all day and nothing else, would you really want to talk to me?
He was probably using some Stanford boilerplate course page material. A more recent Thrun course page, CS221 Introduction to Artificial Intelligence, appears to have the same deficiency.

Thrun is the director of the Stanford Artificial Intelligence Laboratory (SAIL). He led the development of the robotic vehicle Stanley, which won the 2005 DARPA Grand Challenge and is exhibited in the Smithsonian Institution's National Museum of American History. Stanley is a relatively low-budget autonomous vehicle created by Stanford University's Stanford Racing Team in cooperation with the Volkswagen Electronics Research Laboratory (ERL). The DARPA win came with a 2 million dollar prize, the largest prize money in robotics history.

The PBS NOVA show on the DARPA Grand Challenge is spine-tinglingly geeky and entertaining. The meek (drive-by-wire VW Passat vs. tricked-out Hummer) shall inherit the earth.

Sunday, March 20, 2011

Programmers Anonymous notes, 100

From the Register-Guard, "Physics Slam gives public inside look at science":
The country’s first-ever Physics Slam will be held at 7 p.m. Tuesday in Room 150 of Columbia Hall on the University of Oregon campus.

It is free and open to the public.

Six physicists from throughout the country will compete in the event. Each will get 12 minutes to describe a topic in particle physics in a way that the audience can understand.
...
The Physics Slam winner will be chosen by popular acclaim, as determined by applause.
And because these are scientists, they’re not taking anyone’s word for who gets the most.
The judge will be a scientific instrument, a decibel meter, that will determine whose presentation is the biggest hit.

The event is being held in conjunction with the 2011 Linear Collider Workshop of the Americas, which is being hosted by UO physics professor Jim Brau and the UO’s Center for High Energy Physics. It will bring more than 200 scientists to Eugene to discuss the design and experiments for a future linear particle collider.
[Post mortem: Huge crowd, and the overflow room was overflowing. More than the usual problems plugging in video. I found the talks much less than inspiring. Pie charts? Stock stereotypes of scientists? Sexist metaphors alongside old and tired metaphors? Really? There's some great stuff in modern physics, but I didn't see anyone show it off.]


We talked a bit about Fermat's principle and quantum electrodynamics (QED) related to the Many Worlds interpretation of quantum mechanics. Sec Narf mentioned a story in which dreams changed reality, "The Lathe of Heaven" (1971), by Ursula Le Guin. Le Guin, one of the most influential science fiction and fantasy authors, has lived in Portland OR since 1958, and The Lathe of Heaven takes place in Portland:
George Orr, a draftsman, has long been abusing drugs to prevent himself from having "effective" dreams, which retroactively change reality. After having one of these dreams, the new reality is the only reality for everyone else, but George retains memory of the previous reality. Under threat of being placed in an asylum, Orr is forced to undergo "voluntary" psychiatric care for his drug abuse.
George begins attending therapy sessions with an ambitious psychiatrist and sleep researcher named William Haber. Orr claims that he has the power to dream "effectively" and Haber, gradually coming to believe it, seeks to use George's power to change the world. His experiments with a biofeedback/EEG machine, nicknamed the Augmentor, enhance Orr's abilities and produce a series of increasingly intolerable alternate worlds, based on an assortment of utopian (and dystopian) premises familiar from other science fiction works.

The notion of using EEG for biofeedback (neurofeedback), and even for direct control of physical objects (brain-computer interface), has recently seen a resurgence of interest.




Noam Chomsky will speak at the University of Oregon:
“Global Hegemony: The Facts, The Images.”, Wednesday, April 20, 7 p.m. in Columbia 150, free and open to the public.
Chomsky is an interesting speaker on a wide range of topics, from philosophy to the nitty-gritty of foreign policy. While his recent focus is usually politics from a libertarian socialist viewpoint, his linguistic work has had a direct effect on psychology, computing, computer languages, and the notion of computability.
In the 1950s, Chomsky began developing his theory of generative grammar, which has undergone numerous revisions and has had a profound influence on linguistics. His approach to the study of language emphasizes "an innate set of linguistic principles shared by all humans" known as universal grammar, "the initial state of the language learner," and discovering an "account for linguistic variation via the most general possible mechanisms."[9]
For Chomsky, linguistics is a branch of cognitive psychology; and genuine insights in linguistics imply concomitant understandings of aspects of mental processing and human nature. His theory of a universal grammar was seen by many as a direct challenge to the established behaviorist theories of the time and had major consequences for understanding how children learn language and what, exactly, the ability to use language is.
Computer languages are now understood as parts of the Chomsky hierarchy, which partitions formal grammars into classes, or groups, with increasing expressive power, i.e., each successive class can generate a broader set of formal languages than the one before.
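A concrete taste of that hierarchy (my example, not from the talk announcement): the language of n a's followed by exactly n b's is context-free but not regular, so no finite-state pattern matcher can recognize it, while a few lines of recursive code implementing the grammar S -> a S b | (empty) can:

#include <stdio.h>
#include <string.h>

// Recognizer for the context-free language a^n b^n (n >= 0).
// A finite automaton cannot count the a's, but the recursive
// grammar  S -> a S b | (empty)  handles it directly.
static int match( const char *s, int lo, int hi ) {
    if ( lo > hi ) return 1;                  // empty string: accept
    return s[lo] == 'a' && s[hi] == 'b' &&
           match( s, lo + 1, hi - 1 );        // peel one 'a' and one 'b'
}

int main( void ) {
    const char *tests[] = { "", "ab", "aabb", "aab", "abab" };
    for ( int i = 0; i < 5; ++i ) {
        int n = (int)strlen( tests[i] );
        printf( "%-6s %s\n", tests[i],
                match( tests[i], 0, n - 1 ) ? "in a^n b^n" : "not in a^n b^n" );
    }
    return 0;
}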
Computer scientist Donald Knuth admits to reading Syntactic Structures during his honeymoon and being greatly influenced by it. "…I must admit to taking a copy of Noam Chomsky's Syntactic Structures along with me on my honeymoon in 1961 … Here was a marvelous thing: a mathematical theory of language in which I could use a computer programmer's intuition!".

Jacob's no sdfjklsdiocjiojc "these was them days" (comic).


"If I were again beginning my studies, I would follow the advice of Plato and start with mathematics." Galileo Galilei

Abstruse Goose, "The Frontier" (comic). Every Abstruse Goose comic is worth a read.

Friday, March 11, 2011

Large earthquakes happen in the Pacific Northwest

They aren't frequent, only about once every 300-900 years, but it's been a while since the last megathrust earthquake. Smaller earthquakes near population centers can also cause devastation. It takes decades of attention, building, planning and practice to mitigate risks for large, unusual events.

Pacific Northwest Faces Nearly Identical Risks to Japanese Quake

"This is an earthquake of the same type, with about the same magnitude and proximity that we face here in the Pacific Northwest from the Cascadia Subduction Zone. What you are seeing in Japan today is what you will also see in our future. Except they are better prepared than we are." Robert Yeats, Oregon State University