His work has been exhibited nationally and internationally and continues to gain recognition. Recent exhibitions include Homo Virtualis (Porto Santo Biennial) in Porto Santo, Portugal; Tek'tanik at Art Guild New Jersey and Gallery Affero in Newark, New Jersey; Digital Landscapes at the TMG Gallery in Guarda, Portugal; Digital Fringe at the Melbourne Fringe Festival in Melbourne, Australia; Fauna Show at The Workshop Gallery in Bialystok, Poland; NanoArt21, Passion for Knowledge in San Sebastian, Spain; Origins at the Fox Art Gallery in Philadelphia, Pennsylvania; Snap To Grid at the Los Angeles Center for Digital Art in Los Angeles, California; The Human Canvas at The Center for Fine Art Photography in Fort Collins, Colorado; Virtual Worlds at the UAVM; Virtual Humanities at the Icone Gallery in Coimbra, Portugal; SMart Festival at Open Concepts Gallery in Grand Rapids, Michigan; and Retro Futurism at SpaceCamp Gallery in Indianapolis, Indiana.
In 2008, Patrick began to show his work inside the virtual simulation world Second Life; these exhibitions advance beyond two-dimensional work and expand his ideas of simulation, virtual reality, and the synthetic future, in which the physical object gives way to its virtual counterpart and is valued entirely for its idea rather than its place in space.
This transition toward a more prominent virtual presence as an artist eventually led to the inevitable. In 2009, shortly after becoming a regular exhibitor in the virtual environment, Patrick embarked upon his first photographic series to use the environment and society of Second Life as its subject matter and conceptual theme. Virtual Lens is an artistic and anthropological investigation into the life of the avatar, the landscape of the sim environment, and the experience of the virtual world. Patrick continues to photograph and exhibit his portfolios, as well as spend time with fellow avatars, in Second Life.
2010 brought a new role for Patrick as the curator of several exhibitions. He has curated exhibitions for The VASA Project's Online Gallery and Turing Gallery in Second Life that reflect upon digital culture in the world today. Topics such as biotechnology, nanotechnology, virtual reality, artificial intelligence, robotics, renewable energies, gene therapy, cyber culture, and other posthuman and transhuman philosophies are the focus of these exhibitions.
In June 2010, Patrick was artist in residence at Biosphere 2. During his time in residence he began work on photographic, sound, and digital media portfolios. These efforts have yielded a fully developed photographic portfolio of the Biosphere 2 structure and an album to be released on Innova Recordings in 2011. The unique condition of Biosphere 2 attracted Patrick to the residency. As a natural environment that was hermetically sealed and self-sustaining while simultaneously being powered by more than two acres of machinery, B2 played on Patrick's continuing theme of organic and synthetic mergers.
Patrick received a Bachelor of Arts degree in photography from Grand Valley State University and a Master of Fine Arts degree in photography from the Savannah College of Art and Design.
He currently works as Assistant Professor of Photography at Point Park University in Pittsburgh and as an instructor for The VASA Project’s online workshops.
A Brief Overview of the Historical Path of Automata and the Tendency Toward Human Replication and Re-formation
The human desire to replicate and better our own kind, and to artificially create life, goes back to the beginnings of our species. Jasia Reichardt begins her history of automata by naming the first creator or source as God and the first automaton as Adam.
“And the Lord God formed man of the dust of the ground and breathed into his nostrils the breath of life; and man became a living soul.”
Reichardt’s history goes on to include creators such as Prometheus, Hephaestus, Pygmalion, Amenhotep, Daedalus, King-shu Tse, Archytas of Tarentum, and Ctesibius, straight through human history to more modern individuals such as Roger Bacon, Leonardo da Vinci, René Descartes, and Thomas Alva Edison. In each creator we see another example of the crafting of automata.
Roger Bacon (1219–94) allegedly perfected an automaton known as the Speaking Head after working on it for seven years. The head was watched for three weeks in anticipation of its first words. When none were spoken, it was handed over to an attendant who was instructed to alert Bacon at the first sound of words coming from the head. It is said that the first words the head uttered were “Time is,” which, to the attendant, seemed unimportant and not worth calling to Bacon’s attention. Shortly after, the head spoke the words “Time was.” Again the attendant did not notify Bacon. A half hour later, the head spoke the words “Time is past,” at which point it collapsed.
Throughout this historical journey, we constantly see inventors creating automata that replicate the actions of human beings. Edison’s talking dolls, invented in 1891 to advertise his phonograph; René Descartes’ dancing doll from 1640, which could mimic the motions of human beings (it is said to have been a girl doll to replace his daughter, who died at the age of five); and Baron Wolfgang von Kempelen’s 1769 invention, the chess player (a.k.a. The Turk), are all prime examples of early automata.
None of these creations was ever investigated scientifically, and thus none was proved to be in working order. Roger Bacon’s head was never actually shown to have spoken the words the story tells. The fact remains, however, that such heads were being created, and rumors circulated for centuries about heads that were able to talk. The thirteenth-century friar and priest Albertus Magnus, famous for his advocacy of the coexistence of science and religion, was said to have used alchemy to make one of these heads. The story goes that when his disciple Thomas Aquinas found the head, Aquinas smashed it.
Kempelen’s Chess Player was able to beat great chess players of the time, but was actually operated by a human being from within the body of the automaton (fig. 2). As Sidney Perkowitz explains:
We would be right to doubt that eighteenth-century technology mimicked the human brain, because the Turk was a hoax. A human hidden inside the cabinet manipulated the figure’s hand to move the chess pieces, as Poe and others surmised. Nevertheless, the Turk teaches us a lesson in how artificial beings affect people, because over its long history, many believed it could play a meaningful game of chess. Apparently we are willing to meet artificial beings halfway, mentally filling in the blanks between what they present and what we want to believe. Perhaps if the chess player had been displayed only as a collection of gears without a human form, viewers would have found it less believable, although the machinery might have impressed them.
Automata show that the yearning for companionship goes beyond the need for human-to-human relationships, extending toward a more evolved form of a different species. Evolution has provided little beyond humanity in the roughly four billion years since life began, so humanity has worked up possibilities for itself in the past six thousand.
In the past century these possibilities have far exceeded what was thought achievable in the era of early automata. The advent of electrically powered automata, or robots (a term coined in Karel Čapek’s 1921 play R.U.R.), came with the exponential growth of technology and its incorporation into automata and robotics throughout the 20th century.
Many institutions in the academic, government, and corporate worlds have had a hand in building advanced mobile and cognitive robots (MIT, Carnegie Mellon University, Honda, Sony, DARPA). Two high-caliber robots worth noting are ASIMO (Advanced Step in Innovative Mobility) (fig. 3), developed by Honda, and Kismet (fig. 4), developed by Cynthia Breazeal (then a graduate student in Rodney Brooks’s group at MIT). The two robots were developed with distinct goals: ASIMO to be a successful mobile, walking robot and Kismet to be a socially intelligent robot.
It took a few dozen engineers more than a decade to develop the walking capabilities of ASIMO. Its bipedal maneuvers allow for mobility on steep inclines and steps, on one leg, backward, and while turning, all while keeping balance. The ease with which this robot can travel was a giant leap for robotics, and it continues to advance year after year toward more human-like rotation and operation. Though not able to speak as well as a robot like Kismet, ASIMO is able to listen to and follow directions from a human being. It is also equipped with vision that allows it to understand gestures such as pointing in a certain direction or holding a hand up (like a crossing guard’s stop gesture), and to shake hands when a person extends his or her hand. Honda’s goal is to make robots that have a practical use in society. ASIMO also has a built-in facial recognition program, so when a person passes by, the robot captures his or her facial image and applies the appropriate name (and whatever other specific information it has) in order to greet people individually. With the abilities Honda has given it, ASIMO is clearly already capable of performing such roles as host, receptionist, postal worker, or bank teller. Though this may incite Luddite reactions from some, it could provide great help to human beings with daily tasks.
Kismet is unable to walk like ASIMO, but the relationship Kismet builds with its human companions goes far beyond ASIMO’s. The goal with Kismet was to develop a more social robot that could both provide and require interpersonal relationships with human beings. Kismet is equipped with emotional needs and responses (i.e., happy, sad, excited, lonely, threatened, interested) for full interaction with its human friends. If, for instance, a social interaction becomes too close (a human comes too close to Kismet’s face), Kismet will withdraw with either a threatened response or a sleep response in which it simply closes its eyes. Kismet responds appropriately to every interaction. When scolded for misbehaving, it gives a sad response, much like a child; the premise, in fact, is to make Kismet a behavioral learning system, akin to the way children learn from adults. Human response toward Kismet is much more heartfelt than response toward ASIMO because of Kismet’s advanced interpersonal relationship programming. If a human yells at Kismet, for instance, the reaction will likely make the human sympathetic for having hurt Kismet’s feelings. Likewise, subjects seem to enjoy social interactions that provide Kismet with joy and social connectivity. After all, Kismet is designed to show loneliness if it goes for long periods without social interaction with human beings.
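How such behavior can emerge from simple rules is easier to see in miniature. Below is a minimal, purely illustrative sketch of a rule-based emotional response loop; the states, thresholds, and rules are invented for this example and do not reflect Kismet's actual architecture.

```python
# Illustrative rule-based emotional response loop, loosely modeled on the
# behaviors described above. All states, thresholds, and rules are invented
# for this sketch and do not reflect Kismet's actual architecture.

class SocialRobot:
    def __init__(self):
        self.loneliness = 0.0  # grows whenever a cycle passes without interaction

    def respond(self, stimulus: str, distance_cm: float) -> str:
        if distance_cm < 20:                  # interaction too close to the face
            return "threatened" if stimulus == "sudden" else "sleep"
        if stimulus == "scold":
            self.loneliness = 0.0
            return "sad"                      # scolding draws a childlike sad response
        if stimulus == "praise":
            self.loneliness = 0.0
            return "happy"
        self.loneliness += 1.0                # no social input this cycle
        return "lonely" if self.loneliness > 5 else "interested"

robot = SocialRobot()
print(robot.respond("praise", 60))    # -> happy
print(robot.respond("sudden", 10))    # -> threatened
```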
The beauty of ASIMO and Kismet is that they represent some of the most advanced methods of mobility and socialization to date. Moreover, they represent an innate human urge to produce life-like, equal beings to socialize and live with. The fact that these robots are becoming so human-like at such a fast pace also sharpens the social and political questions separating humans from cyborgs, androids (machines built to appear human), and robots. As the abilities of these entities increase, the line between them and us will blur. Technologies used in such robots are also being advanced for placement into the bodies of human beings. Soon we will share a physical makeup with these robots, with some of us becoming robosapiens.
One final creation that blurs the line between the appearance of humans and robots is the android Jules (fig. 5), developed by Hanson Robotics. Hanson Robotics uses advanced software development and materials to create life-like ‘…revolutionary, interactive bio-inspired conversational robots.’ This human counterpart has David Hanson’s frubber skin product (which appears remarkably human), strong artificial intelligence, accurate human emotions and motions, and excellent communication skills. Remarkably, the robot learns as it ages (similar to Kismet) through communication with those around it. A conversation held with Jules is not unlike one you might have with any stranger on the street, with only an occasional kink that would amount to a failure of the Turing Test.
All of these advances follow the pattern of exponential technological growth described by Moore’s Law. Ray Kurzweil bases his predictions of technological triumph on this law when he predicts:
Sometime early in this century the intelligence of machines will exceed that of humans. Within a quarter of a century, machines will exhibit the full range of human intellect, emotions and skills, ranging from musical and other creative aptitudes to physical movement. They will claim to have feelings, and, unlike today’s virtual personalities, will be very convincing when they tell us so. By around 2020 a $1,000 computer will at least match the processing power of the human brain. By 2029 the software for intelligence will have been largely mastered, and the average personal computer will be equivalent to 1,000 brains.
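The arithmetic behind such projections is simple compounding. The sketch below works it out under assumed parameters (computation per dollar doubling every 18 months, a $1,000 machine at 10^12 operations per second today, and a debated estimate of 10^16 operations per second for the human brain); these figures are illustrative assumptions, not Kurzweil's exact numbers.

```python
# Compound-doubling projection in the spirit of Moore's Law.
# Assumptions (illustrative only, not Kurzweil's exact parameters):
#   - computation per dollar doubles every 18 months
#   - a $1,000 machine delivers 1e12 operations/sec in year 0
#   - the human brain is taken as ~1e16 operations/sec (a debated estimate)

DOUBLING_PERIOD_YEARS = 1.5
START_OPS = 1e12      # ops/sec purchasable for $1,000 at year 0
BRAIN_OPS = 1e16      # assumed brain-equivalent throughput

years, ops = 0.0, START_OPS
while ops < BRAIN_OPS:
    ops *= 2
    years += DOUBLING_PERIOD_YEARS

# 1e16 / 1e12 = 1e4, which is about 2**13.3, so 14 doublings: ~21 years.
print(f"~{years:.0f} years until $1,000 of hardware reaches {BRAIN_OPS:.0e} ops/sec")
```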
Figure 2: An explanatory illustration of how The Turk may have been controlled by a human from inside the chest.
Figure 3: Honda's walking robot ASIMO
Figure 4: Kismet giving a rather human-like expression
Figure 5: Jules showing off his frubber skin and lifelike expressions.
*excerpt from Formatting Gaia: A Comprehensive Outline of the Photographic Work
The Army also began to use virtual reality and video game technology as a promotional tool aimed at young people early this century. Those promotional games became so popular and so accurate that they eventually became a tool for training soldiers in the US Army:
With more than 8 million registered users, the Army-developed video game "America's Army" is an interactive, first-person shooting game that gives civilians a taste of the soldier's life.
However, what debuted in 2002 as a promotional tool for the Army has evolved into a practical training tool for soldiers.
At Picatinny Arsenal, the Armament Research, Development and Engineering Center has used the game to develop training simulators to help familiarize soldiers with the robots used to detonate improvised explosive devices.
Though they had been doing simulation for a long time, when Picatinny simulation engineers saw the game they wanted to put that level of detail into their graphics simulation, said Brad Drake, computer engineer and team leader for America's Army -- Picatinny.
-Picatinny video game helps train soldiers [http://www.dailyrecord.com/apps/pbcs.dll/article?AID=/20080829/COMMUNITIES/808290339]
I suppose the biggest downfall here is that wars themselves cannot be fought virtually as well.
‘And, for an instant, she stared directly into those soft blue eyes and knew, with an instinctive mammalian certainty, that the exceedingly rich were no longer even remotely human.’ —William Gibson, Count Zero
It is inevitable that the posthuman technologies of the future and elements of cyborg culture, such as bio-engineering, life prolongation, and neural upgrades, will be sought after and dominated by the extremely wealthy from their inception. The availability of such human evolutionary extensions will be of extreme value and in high demand when they first hit the market. The cost of better memory, replacement organs, neural implants, and eternal life will be like that of any other new technology that comes along: prohibitive. Our economic structure will be unable to sidestep the fact that the low-income human being will not have access to the same biotechnological enhancements as the wealthy.
What this potentially means for the poor is that they will be on the trailing end of the herd as they scramble for survival. With limited money available for body-tech upgrades, parents will be unable to fund their own or their children's future stability in a culture inundated by such requirements. The upper classes will more likely be able to afford the technologies they need to live an educated, connected, and involved life. In the future, they will obtain those advantages as well as a superior biological system, advanced neural computational abilities, and medical care of a kind the lower classes will not receive. The lower class, struggling to provide for their loved ones, will fall away first as an outdated life form, unable to compete with the advances made by their wealthier contemporaries. Ramez Naam discusses the possibility of these social class differences arising from genetic therapy and manipulation in his book More Than Human:
Inequality in access to enhancement technologies brings the risk of stratification of the rich from the rest of the population. Some enhancements, like learning ability or memory, will increase earning ability. If the rich are able to buy these enhancements and the poor cannot, then the rich will be increasingly advantaged and the poor will fall ever farther behind. For the rich, this would be a virtuous cycle of gains begetting more gains. For the poor, it would be a vicious cycle, as lack of access to enhancements prevented access to the best jobs, thus robbing them of the money they need to buy enhancements.
Possibilities for survival always exist, even for those who are financially and socially limited. Lower classes may learn to re-work the broken or obsolete technologies discarded by the rich, but reaching the upper tier of progression will be unlikely. In the work of Formatting Gaia, the subjects are depicted as powered by, and interacting with, configurations made of discarded materials, unlike the forms often portrayed in sci-fi visions of techno-culture. The work carries a subtle timelessness, offering a sense of both the future and the recent past, that sets up a fantastical or mythical world.
The image Evening Reboot (fig. 1), for instance, presents a body wired up to a module of what appears to be a regurgitation of electronic circuitry. It portrays an individual who has been forced to work with what she was given in life, in this instance less-than-satisfactory equipment, in order to technologically orient her body away from the less efficient biological artifact it was born as. Becoming a more efficient individual is a constant measure of success and a standing justification for these body changes, so they are always to be viewed as a necessity merely to keep up with the evolving form of Homo sapiens.
Formatting Gaia depicts human beings who have confronted technology head on and achieved not merely an accommodation for co-existence within human society but a full integration. External operators such as modern robotics are a thing of the past by the time this world has evolved. Existence is now experienced through embodied technologies that provide a world surpassing the notion of life today, with heightened senses and virtual worlds beyond our current understanding. These changes will not just re-configure our day-to-day experience; they will also reopen philosophical debate about the same old questions, only within a newly ordered rendition of the world. These narratives become crucial to allowing the viewer to fully comprehend the suggested possibilities and scenarios.
Figure 1: Evening Reboot
*excerpt from Formatting Gaia: A Comprehensive Outline of the Photographic Work
We live in a time of tremendous change. Through technology, we are re-formatting life processes and shaping what will eventually be needed for survival. In the photographic work of Formatting Gaia and in the following pages I will investigate the nature of these changes, some already visible though commonly overlooked, and the possible implications these alterations hold for the 21st century and beyond.
There are many ideological fronts under attack as we enter an increasingly technological social stratum. These include our understanding of biology, philosophy, sociology, psychology, and humanity itself. There will be heated debate as we begin to integrate into the body new technologies, genetically enhance embryos for birth, use DNA structure for cloning, and make amendments to the physical body by adding superior senses (sight, smell, taste, touch, and hearing) or muscular and mental capabilities. Such drastic shifts will call into question everything we’ve held to be safely true or have come to understand in contemporary civilization.
*excerpt from Formatting Gaia: A Comprehensive Outline of the Photographic Work
At the Neuroscience 2011 conference, scientists at The Rockefeller University, The Scripps Research Institute, and the University of Pennsylvania presented new research demonstrating the impact that life experiences can have on genes and behavior. The studies examine how such environmental information can be transmitted from one generation to the next — a phenomenon known as epigenetics. This new knowledge could ultimately improve understanding of brain plasticity, the cognitive benefits of motherhood, and how a parent’s exposure to drugs, alcohol, and stress can alter brain development and behavior in their offspring.
The new findings show that:
- Brain cell activation changes a protein involved in turning genes on and off, suggesting the protein may play a role in brain plasticity.
- Prenatal exposure to amphetamines and alcohol produces abnormal numbers of chromosomes in fetal mouse brains. The findings suggest these abnormal counts may contribute to the developmental defects seen in children exposed to drugs and alcohol in utero.
- Cocaine-induced changes in the brain may be inheritable. Sons of male rats exposed to cocaine are resistant to the rewarding effects of the drug.
- Motherhood protects female mice against some of the negative effects of stress.
- Mice conceived through breeding — but not those conceived through reproductive technologies — show anxiety-like and depressive-like behaviors similar to their fathers. The findings call into question how these behaviors are transmitted across generations.
Source | KurzweilAI
A robot that can control both its own arm and a person’s arm to manipulate objects in a collaborative manner has been developed by Montpellier Laboratory of Informatics, Robotics, and Microelectronics (LIRMM) researchers, IEEE Spectrum Automation reports.
The robot controls the human limb by sending small electrical currents to electrodes taped to the person’s forearm and biceps, which allows the robot to command the elbow and hand to move. In the experiment, the person holds a ball, and the robot holds a hoop; the robot, a small humanoid, has to coordinate the movement of both human and robot arms to successfully drop the ball through the hoop.
The researchers say their goal is to develop robotic technologies that can help people suffering from paralysis and other disabilities to regain some of their motor skills.
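In control terms, the robot closes one loop over two effectors at once: its own arm through motor commands, and the human arm through stimulation currents. The sketch below illustrates that idea only; every function name, gain, and data layout is hypothetical, and real functional electrical stimulation of a human limb is far more complex than this proportional rule.

```python
# Hypothetical coordination step: one controller drives a robot arm (direct
# motor commands) and a human arm (stimulation currents to forearm/biceps
# electrodes). All names, gains, and data layouts are invented.

def coordinate_step(ball_pos, hoop_pos, target, gain=0.5):
    """Return (stimulation currents, motor commands) that nudge the ball
    (held by the human) and the hoop (held by the robot) toward alignment."""
    human_error = [t - b for t, b in zip(target, ball_pos)]
    robot_error = [t - h for t, h in zip(target, hoop_pos)]
    stim_currents = [gain * e for e in human_error]   # drives elbow/hand motion
    motor_commands = [gain * e for e in robot_error]  # drives the robot joints
    return stim_currents, motor_commands

stim, motors = coordinate_step(ball_pos=[0.1, 0.4, 0.2],
                               hoop_pos=[0.3, 0.5, 0.4],
                               target=[0.2, 0.5, 0.3])
print("stimulation:", stim, "| motor commands:", motors)
```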
Source | KurzweilAI
Four Wave Gliders — self-propelled robots, each about the size of a dolphin — left San Francisco on Nov. 17 for a 60,000-kilometer journey, IEEE Spectrum Automation reports.
Built by Liquid Robotics, the robots will use waves to power their propulsion systems and the Sun to power their sensors, as a capability demonstration. They will be measuring things like water salinity, temperature, clarity, and oxygen content; collecting weather data; and gathering information on wave features and currents.
The data from the fleet of robots is being streamed via the Iridium satellite network and made freely available on Google Earth’s Ocean Showcase.
Source | KurzweilAI
“Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton,” said study leader Miguel Nicolelis, MD, PhD, professor of neurobiology at Duke University Medical Center and co-director of the Duke Center for Neuroengineering.
Sensing textures of virtual objects
Without moving any part of their real bodies, the monkeys used their electrical brain activity to direct the virtual hands of an avatar to the surface of virtual objects and differentiate their textures. Although the virtual objects employed in this study were visually identical, they were designed to have different artificial textures that could only be detected if the animals explored them with virtual hands controlled directly by their brain’s electrical activity.
The texture of the virtual objects was expressed as a pattern of electrical signals transmitted to the monkeys’ brains, with a different electrical pattern corresponding to each of the three object textures.
Because no part of the animal’s real body was involved in the operation of this brain-machine-brain interface, these experiments suggest that in the future, patients who were severely paralyzed due to a spinal cord lesion may take advantage of this technology to regain mobility and also to have their sense of touch restored, said Nicolelis.
First bidirectional link between brain and virtual body
“This is the first demonstration of a brain-machine-brain interface (BMBI) that establishes a direct, bidirectional link between a brain and a virtual body,” Nicolelis said.
“In this BMBI, the virtual body is controlled directly by the animal’s brain activity, while its virtual hand generates tactile feedback information that is signaled via direct electrical microstimulation of another region of the animal’s cortex. We hope that in the next few years this technology could help to restore a more autonomous life to many patients who are currently locked in without being able to move or experience any tactile sensation of the surrounding world,” Nicolelis said.
“This is also the first time we’ve observed a brain controlling a virtual arm that explores objects while the brain simultaneously receives electrical feedback signals that describe the fine texture of objects ‘touched’ by the monkey’s newly acquired virtual hand.
“Such an interaction between the brain and a virtual avatar was totally independent of the animal’s real body, because the animals did not move their real arms and hands, nor did they use their real skin to touch the objects and identify their texture. It’s almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves.”
The combined electrical activity of populations of 50 to 200 neurons in the monkey’s motor cortex controlled the steering of the avatar arm, while thousands of neurons in the primary tactile cortex were simultaneously receiving continuous electrical feedback from the virtual hand’s palm that let the monkey discriminate between objects, based on their texture alone.
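A rough way to picture the two halves of that interface is as a decode path and an encode path running in parallel. The following sketch is illustrative only: the decoder weights, array sizes, and pattern contents are invented for this example, not taken from the study.

```python
# Sketch of a brain-machine-brain interface loop: decode motor-cortex
# activity into a velocity command for the virtual hand, and encode the
# touched texture as one of three stimulation patterns. Decoder weights,
# array sizes, and pattern contents are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Forward path: a fixed linear decoder maps the firing rates of 100
# motor-cortex neurons to a 2-D hand velocity.
decoder = rng.normal(size=(2, 100)) * 0.01

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    return decoder @ firing_rates

# Feedback path: one distinct (toy) pulse pattern per virtual texture.
STIM_PATTERNS = {
    "texture_a": [1, 0, 1, 0, 1],
    "texture_b": [1, 1, 0, 0, 1],
    "texture_c": [0, 1, 1, 1, 0],
}

def tactile_feedback(texture: str) -> list:
    return STIM_PATTERNS[texture]   # delivered via cortical microstimulation

firing = rng.poisson(5, size=100).astype(float)  # simulated firing rates
print("hand velocity:", decode_velocity(firing))
print("pattern for texture_b:", tactile_feedback("texture_b"))
```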
Robotic exoskeleton for paralyzed patients
“The remarkable success with non-human primates is what makes us believe that humans could accomplish the same task much more easily in the near future,” Nicolelis said.
The findings provide further evidence that it may be possible to create a robotic exoskeleton that severely paralyzed patients could wear in order to explore and receive feedback from the outside world, Nicolelis said. The exoskeleton would be directly controlled by the patient’s voluntary brain activity to allow the patient to move autonomously. Simultaneously, sensors distributed across the exoskeleton would generate the type of tactile feedback needed for the patient’s brain to identify the texture, shape and temperature of objects, as well as many features of the surface upon which they walk.
This overall therapeutic approach is the one chosen by the Walk Again Project, an international, non-profit consortium, established by a team of Brazilian, American, Swiss, and German scientists, which aims at restoring full-body mobility to quadriplegic patients through a brain-machine-brain interface implemented in conjunction with a full-body robotic exoskeleton.
The international scientific team recently proposed to carry out its first public demonstration of such an autonomous exoskeleton during the opening game of the 2014 FIFA Soccer World Cup that will be held in Brazil.
Ref.: Joseph E. O’Doherty, Mikhail A. Lebedev, Peter J. Ifft, Katie Z. Zhuang, Solaiman Shokur, Hannes Bleuler, and Miguel A. L. Nicolelis, Active tactile exploration using a brain–machine–brain interface, Nature, October 2011 [doi:10.1038/nature10489]
Source | KurzweilAI
When it happened, emotions flashed like lightning.
The nearby robotic hand that Tim Hemmes was controlling with his mind touched his girlfriend Katie Schaffer’s outstretched hand.
One small touch for Mr. Hemmes; one giant reach for people with disabilities.
Tears of joy flowing in an Oakland laboratory that day continued later when Mr. Hemmes toasted his and University of Pittsburgh researchers’ success at a local restaurant with two daiquiris.
A simple act for most people marked a major advance in two decades of research into what had long been the stuff of science fiction.
Mr. Hemmes’ success in putting the robotic hand in the waiting hand of Ms. Schaffer, 27, of Philadelphia, represented the first time a person with quadriplegia has used his mind to control a robotic arm so masterfully.
The 30-year-old man from Connoquenessing Township, Butler County, hadn’t moved his arms, hands or legs since a motorcycle accident seven years earlier. But Mr. Hemmes had practiced six hours a day, six days a week for nearly a month to move the arm with his mind.
That successful act increases hope for people with paralysis or loss of limbs that they can feed and dress themselves and open doors, among other tasks, with a mind-controlled robotic arm. It’s also improved the prospects of wiring around spinal cord injuries to allow motionless arms and legs to function once again.
“I think the potential here is incredible,” said Dr. Michael Boninger, director of UPMC’s Rehabilitation Institute and a principal investigator in the project. “This is a breakthrough for us.”
Mr. Hemmes? They say he’s a rock star.
In a project led by Andrew Schwartz, Ph.D., a University of Pittsburgh professor of neurobiology, researchers taught a monkey how to use a robotic arm mentally to feed itself marshmallows. Electrodes had been shallowly implanted in its brain to read signals from neurons known to control arm motion.
Electrocorticography, or ECoG — in which an electrode grid is surgically placed against the brain without penetrating it — captures brain signals less intrusively.
ECoG has been used to locate sites of seizures and do other experiments in patients with epilepsy. Those experiments were prelude to seeking a candidate with quadriplegia to test ECoG’s capability to control a robotic arm developed by Johns Hopkins University.
The still unanswered question was whether the brains of people with long-term paralysis still produced signals to move their limbs.
ECoG picks up an array of brain signals, almost like a secret code or new language, that a computer algorithm can interpret and then move a robotic arm based on the person’s intentions. It’s a simple explanation for complex science.
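As a concrete (and purely hypothetical) stand-in for that algorithm, a linear decoder trained on per-electrode band-power features captures the general shape of the approach; the actual Pitt pipeline is not described in this article, so every detail below is an assumption.

```python
# Toy ECoG decoder: fit a linear map from per-electrode band-power features
# to 3-D cursor velocity using ridge regression. The data here are random
# stand-ins; features, model, and dimensions are all assumptions.

import numpy as np

rng = np.random.default_rng(1)

n_samples, n_electrodes = 500, 32
X = rng.normal(size=(n_samples, n_electrodes))          # band-power features
true_W = rng.normal(size=(n_electrodes, 3))             # unknown "true" mapping
y = X @ true_W + 0.1 * rng.normal(size=(n_samples, 3))  # intended velocities

# Ridge regression: W = (X^T X + lambda*I)^-1 X^T y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ y)

new_features = rng.normal(size=n_electrodes)  # one new time window of signals
print("decoded velocity command:", new_features @ W)
```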
Mr. Hemmes’ name cropped up so many times as a potential candidate that the team called him to gauge his interest.
He said no.
He was already involved in a research project in Cleveland and feared this project would interfere. But knowing they had the ideal candidate, the team called back. This time he agreed, as long as it would not limit his participation in future phases of research.
Mr. Hemmes became quadriplegic July 11, 2004, apparently after a deer darted onto the roadway, causing him to swerve his motorcycle onto gravel where his shoulder hit a mailbox, sending him flying headfirst into a guardrail. The top of his helmet became impaled on a guardrail I-beam, rendering his head motionless while his body continued flying, snapping his neck at the fourth cervical vertebra.
A passer-by found him with blue lips and no signs of breathing. Mr. Hemmes was flown by rescue helicopter to UPMC Mercy and diagnosed with quadriplegia — a condition in which he had lost use of his limbs and his body below the neck or shoulders. He had to learn how to breathe on his own. His doctor told him it was the worst accident he’d ever seen in which the person survived.
But after the process of adapting psychologically to quadriplegia, Mr. Hemmes chose to pursue a full life, especially after he got a device to operate a computer and another to operate a wheelchair with head motions.
Since January, he has operated the website — www.Pittsburghpitbullrescue.com — to rescue homeless pit bulls and find them new owners.
The former hockey player’s competitive spirit and willingness to face risk were key attributes. Elizabeth Tyler-Kabara, the UPMC neurosurgeon who would install the ECoG in Mr. Hemmes’ brain, said he had strong motivation and a vision that paralysis could be cured.
Ever since his accident, Mr. Hemmes said, he’s had the goal of hugging his daughter Jaylei, now 8. This could be the first step.
“It’s an honor that they picked me, and I feel humbled,” Mr. Hemmes said.
Mr. Hemmes underwent several hours of surgery to install the ECoG at a precise location against the brain. Wires running under the skin down to a port near his collarbone — where wires can connect to the robotic arm — caused him a stiff neck for a few days.
Two days after surgery, he began exhaustive training on mentally maneuvering a computer cursor in various directions to reach and make targets disappear. Next he learned to move the cursor diagonally before working for hours to capture targets in a three-dimensional computer environment.
The U.S. Food and Drug Administration allowed the trial to last only 28 days, after which the ECoG grid was removed. The project, initially funded by UPMC, has received more than $6 million in funding from the Department of Veterans Affairs, the National Institutes of Health, and the U.S. Department of Defense’s Defense Advanced Research Projects Agency, known as DARPA.
Initially Mr. Hemmes tried thinking about flexing his arm to move the cursor. But he had better success visually grabbing the ball-shaped cursor to throw it toward a target on the screen. The “mental eye-grabbing” worked best when he was relaxed.
Soon he was capturing 15 of 16 targets and sometimes all 16 during timed sessions. The next challenge was moving the robotic arm with his mind.
The same mental processes worked, but the arm moved more slowly and in real space. Time was ticking away as the experiment approached its final days last month. With Mr. Hemmes finally moving the arm in all directions, Wei Wang — assistant professor of physical medicine and rehabilitation at Pitt’s School of Medicine, who also has worked on the signaling system — stood in front of him and raised his hand.
The robotic arm that Mr. Hemmes was controlling moved with fits and starts but in time reached Dr. Wang’s upheld hand. Mr. Hemmes gave him a high five.
The big moment arrived.
Katie Schaffer stood before her boyfriend with her hand extended. “Baby,” she said encouraging him, “touch my hand.”
It took several minutes, but he raised the robotic hand and pushed it toward Ms. Schaffer until its palm finally touched hers. Tears flowed.
“It’s the first time I’ve reached out to anybody in over seven years,” Mr. Hemmes said. “I wanted to touch Katie. I never got to do that before.”
“I have tattoos, and I’m a big, strong guy,” he said in retrospect. “So if I’m going to cry, I’m going to bawl my eyes out. It was pure emotion.”
Mr. Hemmes said his accomplishments represent a first step toward “a cure for paralysis.” The research team is cautious about such statements without denying the possibility. They prefer identifying the goal of restoring function in people with disabilities.
“This was way beyond what we expected,” Dr. Tyler-Kabara said. “We really hit a home run, and I’m thrilled.”
The next phase will include up to six people tested in another 30-day trial with ECoG. A year-long trial will test the electrode array that shallowly penetrates the brain. Goals during these phases include expanding the degrees of arm motions to allow people to “pick up a grape or grasp and turn a door knob,” Dr. Tyler-Kabara said.
Anyone interested in participating should call 1-800-533-8762.
Mr. Hemmes says he will participate in future research.
“This is something big, but I’m not done yet,” he said. “I want to hug my daughter.”
A new map of the moon has uncovered a trove of areas rich in precious titanium ore, with some lunar rocks harboring 10 times as much of the stuff as rocks here on Earth do.
The map, which combined observations in visible and ultraviolet wavelengths, revealed the valuable titanium deposits. These findings could shed light on some of the mysteries of the lunar interior, and could also lay the groundwork for future mining on the moon, researchers said.
“Looking up at the moon, its surface appears painted with shades of grey — at least to the human eye,” Mark Robinson, of Arizona State University, said in a statement. “The maria appear reddish in some places and blue in others. Although subtle, these color variations tell us important things about the chemistry and evolution of the lunar surface. They indicate the titanium and iron abundance, as well as the maturity of a lunar soil.”
The results of the study were presented Friday (Oct. 7) at the joint meeting of the European Planetary Science Congress and the American Astronomical Society’s Division for Planetary Sciences in Nantes, France.
Mapping the lunar surface
The map of the moon’s surface was constructed using data from NASA’s Lunar Reconnaissance Orbiter (LRO), which has been circling the moon since June 2009. The probe’s wide angle camera snapped pictures of the surface in seven different wavelengths at different resolutions.
Since specific minerals strongly reflect or absorb different parts of the electromagnetic spectrum, LRO’s instruments were able to give scientists a clearer picture of the chemical composition of the moon’s surface.
Robinson and his colleagues stitched together a mosaic using roughly 4,000 images that had been collected by the spacecraft over one month.
The researchers scanned the lunar surface and compared the brightness in the range of wavelengths from ultraviolet to visible light, picking out areas that are abundant in titanium. The scientists then cross-referenced their findings with lunar samples that were brought back to Earth from NASA’s Apollo flights and the Russian Luna missions.
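The underlying technique is a band ratio: dividing brightness at one wavelength by brightness at another largely cancels illumination differences and isolates composition. The sketch below illustrates that step with toy data; the linear calibration constants are invented, whereas the real relation was anchored to the Apollo and Luna sample sites as described above.

```python
# Band-ratio sketch: per-pixel UV/visible brightness ratio converted to an
# estimated TiO2 abundance with a made-up linear calibration. The real
# calibration was anchored to Apollo/Luna sample-site measurements.

import numpy as np

rng = np.random.default_rng(2)
uv  = rng.uniform(0.02, 0.10, size=(4, 4))  # toy ultraviolet reflectance
vis = rng.uniform(0.05, 0.20, size=(4, 4))  # toy visible reflectance

ratio = uv / vis                   # ratio largely cancels illumination effects
a, b = 25.0, -5.0                  # hypothetical calibration constants
tio2_percent = np.clip(a * ratio + b, 0.0, None)

print(tio2_percent.round(2))       # estimated weight-percent TiO2 per pixel
```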
These titanium-rich areas on the moon puzzled the researchers. The highest abundance of titanium in similar rocks on Earth hovers around 1 percent or less, the scientists explained. The new map shows that these troves of titanium on the moon range from about 1 percent to a little more than 10 percent.
“We still don’t really understand why we find much higher abundances of titanium on the moon compared to similar types of rocks on Earth,” Robinson said. “What the lunar titanium-richness does tell us is something about the conditions inside the moon shortly after it formed, knowledge that geochemists value for understanding the evolution of the moon.”
Valuable titanium ore
Titanium on the moon is primarily found in the mineral ilmenite, a compound that contains iron, titanium and oxygen. If humans one day mine on the moon, they could break down ilmenite to separate these elements.
Furthermore, Apollo data indicated that titanium-rich minerals are more efficient at retaining solar wind particles, such as helium and hydrogen. These gases would likely be vital resources in the construction of lunar colonies and for exploration of the moon, the researchers said.
“Astronauts will want to visit places with both high scientific value and a high potential for resources that can be used to support exploration activities,” Robinson said. “Areas with high titanium provide both — a pathway to understanding the interior of the moon and potential mining resources.”
The lunar map also shows how space weather changes the surface of the moon. Charged particles from solar wind and micrometeorite impacts can change the moon’s surface materials, pulverizing rock into a fine powder and altering the chemical composition of the lunar surface.
“One of the exciting discoveries we’ve made is that the effects of weathering show up much more quickly in ultraviolet than in visible or infrared wavelengths,” study co-author Brett Denevi, of Johns Hopkins University Applied Physics Laboratory in Laurel, Md., said in a statement. “In the [Lunar Reconnaissance Orbiter Camera] ultraviolet mosaics, even craters that we thought were very young appear relatively mature. Only small, very recently formed craters show up as fresh regolith exposed on the surface.”
Source | SPACE
Mapping real-world motions to “self-animated” virtual avatars, using body tracking to communicate a wide range of gestures, helps people communicate better in virtual worlds like Second Life, say researchers from the Max Planck Institute for Biological Cybernetics and Korea University.
They conducted two experiments to investigate whether head-mounted-display virtual reality is useful for studying the influence of body gestures on communication, and whether body gestures help communicate the meaning of a word. Participants worked in pairs and played a communication game in which one person had to describe the meanings of words to the other.
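Mechanically, a “self-animated” avatar streams tracked joint rotations onto the avatar skeleton every frame rather than triggering canned gesture animations. The sketch below illustrates that mapping; the joint names, data layout, and interfaces are invented for illustration.

```python
# Sketch of self-animation: copy tracked joint rotations onto avatar bones
# each frame instead of playing canned gestures. Joint names and the data
# layout are invented for illustration.

TRACKED_TO_AVATAR = {
    "Head": "avatar_head",
    "LeftHand": "avatar_l_hand",
    "RightHand": "avatar_r_hand",
}

def apply_tracking(frame: dict, avatar_pose: dict) -> None:
    """frame maps a tracked joint name to an (x, y, z) rotation in degrees."""
    for tracked, bone in TRACKED_TO_AVATAR.items():
        if tracked in frame:
            avatar_pose[bone] = frame[tracked]  # drive the bone directly

pose = {}
apply_tracking({"LeftHand": (0, 45, 10), "Head": (5, 0, 0)}, pose)
print(pose)  # {'avatar_head': (5, 0, 0), 'avatar_l_hand': (0, 45, 10)}
```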
Ref.: Trevor J. Dodds et al., Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments, PLoS One, DOI: 10.1371/journal.pone.0025759 (free access)
Source | KurzweilAI