His work has been exhibited nationally and internationally and continues to gain recognition. Recent exhibitions include Homo Virtualis [Porto Santo Biennial] in Porto Santo, Portugal; Tek'tanik at Art Guild New Jersey and Gallery Affero in Newark, New Jersey; Digital Landscapes at the TMG Gallery in Guarda, Portugal; Digital Fringe at the Melbourne Fringe Festival in Melbourne, Australia; Fauna Show at The Workshop Gallery in Bialystok, Poland; NanoArt21, Passion for Knowledge in San Sebastian, Spain; Origins at the Fox Art Gallery in Philadelphia, Pennsylvania; Snap To Grid at the Los Angeles Center for Digital Art in Los Angeles, California; The Human Canvas at The Center for Fine Art Photography in Fort Collins, Colorado; Virtual Worlds at the UAVM; Virtual Humanities at the Icone Gallery in Coimbra, Portugal; SMart Festival at Open Concepts Gallery in Grand Rapids, MI; and Retro Futurism at SpaceCamp Gallery in Indianapolis, IN.
In 2008, Patrick began to show his work inside the virtual simulation world Second Life, mounting exhibitions that advance beyond two-dimensional work and expand his ideas of simulation, virtual reality, and a synthetic future in which the physical object gives way to its virtual counterpart and is valued entirely for its idea rather than its place in space.
This transition toward a more prominent virtual presence as an artist eventually led to the inevitable. In 2009, shortly after becoming a regular exhibitor in the virtual environment, Patrick embarked upon his first photographic series that used the environment and society of Second Life as its subject matter and conceptual theme. Virtual Lens is an artistic and anthropological investigation into the life of the avatar, landscape of the sim environment, and experience of the virtual world. Patrick continues to photograph and exhibit his portfolios as well as spend time with fellow avatars in Second Life.
2010 brought a new role for Patrick as the curator of several exhibitions. He has curated exhibitions for The VASA Project's Online Gallery and Turing Gallery in Second Life that reflect upon digital culture in the world today. Topics such as biotechnology, nanotechnology, virtual reality, artificial intelligence, robotics, renewable energies, gene therapy, cyber culture, and other posthuman and transhuman philosophies are the focus of these exhibitions.
In June 2010, Patrick was artist in residence at Biosphere 2. During his time in residence he began work on photographic, sound, and digital media portfolios. These efforts have yielded a fully developed photographic portfolio of the Biosphere 2 structure and an album to be released on Innova Recordings in 2011. The unique condition of Biosphere 2 attracted Patrick to the residency: as a natural environment that was hermetically sealed and self-sustaining while simultaneously being powered by more than two acres of machinery, Biosphere 2 played on Patrick's continuing theme of organic and synthetic mergers.
Patrick received a Bachelor of Arts degree in photography from Grand Valley State University and a Master of Fine Arts degree in photography from the Savannah College of Art and Design.
He currently works as Assistant Professor of Photography at Point Park University in Pittsburgh and as an instructor for The Vasa Project’s online workshops.
Altering the body to withstand environmental conditions, aging, or bodily shortcomings is an understandable step. While these changes to the body are indeed a positive step toward evolutionary improvement, they come with a certain uncanny sensation.
When we think of putting computer chips, GPS locators, or prosthetic organs and body parts into our bodies, we react squeamishly. This is similar to the unsettling feeling of watching open heart or brain surgery; we're simply not accustomed to closely viewing what the inside of our bodies looks like. We get the same feeling from the introduction of foreign materials into the body. It would be a valid response to feel a bit unnerved heading into an appointment to have a device embedded, as it is a change from the constant that we have been historically programmed to find comfortable.
The uncanny valley, a concept developed by Masahiro Mori (fig. 7), depicts the moment at which these unnerving feelings and awkward sensations occur. The less familiarity we have with an object, the more uncanny it will be to us. Interestingly, the graph shows that we are already more familiar with the industrial and humanoid robot than with the corpse or zombie figure, despite having been aware of death and the idea of life after death for centuries. The uncanny valley seems to reach its peak not when we are introduced to the purely mechanical form of moving automata such as the robot or computer-based artificial intelligence, but rather when the technological alteration comes directly in contact with the human body itself. Thus we see the prosthetic hand in the same vicinity as a corpse or a zombie.
As familiarity with such implants and modes of living in the post-human world grows, this notion of the uncanny and the squeamish reaction it provokes will dissipate into everyday acceptance. It will no longer seem out of place to see individuals walking down the street with telecommunications accessible directly from their limbs, enhanced muscular ability letting athletes compete at levels steroids could never reach, or Bluetooth connections transmitted wirelessly from our very clothing to start a car or even communicate with others without using speech.
These possibilities are already in the research and development stages. Researchers in California have developed an artificial muscle that has the ability to heal itself as well as produce electricity. This new muscular technology was recently written about in Discovery News:
The research, parts of which are already being used in Japan to generate electricity from ocean waves, could be used to make walking robots, develop better prosthetics, or even charge your iPod.
“We’ve made an artificial muscle that, when you apply electricity to it, it expands” more than 200 percent, said Qibing Pei, a scientist at the University of California, Los Angeles and study author. “The motion and energy is a lot like human muscles.”
Artificial muscles have been around for years but have essentially hamstrung themselves. Some artificial muscles get so big they tear, developing uneven film thickness and random particles that cause muscle failure.
The researchers used flexible, ever-more ubiquitous carbon nanotubes as electrodes instead of other films, often metal-based, that fail after repeated use.
If an area of the carbon nanotube fails, the region around it seals itself by becoming non-conductive and prevents the fault from spreading to other areas.
“During long-term tests with the new device the actual material experiences a number of events but still worked,” said Pei.
By “events” Pei actually means they stabbed the artificial muscle with pins. Any other artificial muscle would have failed, but their model kept operating.
The self-healing muscle is also energy efficient.
“It conserves about 70 percent of the energy you put into it,” said Pei.
As the material contracts after an expansion the rearranging of the carbon nanotubes generates a small electric current that can be captured and used to power another expansion or stored in a battery.
Scientists in Japan charge batteries from ocean waves using the same idea. Other scientists have speculated that the artificial muscle could be used to capture wind energy.
“The way he’s put these carbon nanotubes together is really quite innovative,” said Kwang Kim, a material scientist at the University of Reno who was not involved in the research. “Some people want to use this to charge their batteries.”
The research appeared in the January issue of Advanced Materials.
Artists are also working with technology in ways that help predict how these changes will occur. The possibilities for globalized communication, closely knit society, and the breaking down of personal barriers may be the result of wearable media.
Steve Mann is a professor of electrical and computer engineering at the University of Toronto and has been wearing a computer since the 1970s (fig. 8). Mann's use of wearables has enabled him to expand his communications, customize his vision for personal use, and develop interpersonal relationships within media-sharing communities. Because Mann has lived with external media attached to his body for so long, continually upgrading it since the 1980s, he has been described by many as the world's first cyborg. In 1998 Mann said of his wearables:
Wearable computing facilitates a new form of human-computer interaction comprising of a small body-worn computer (e.g. user-programmable device) that is always on and always ready and accessible. In this regard, the new computational framework differs from that of handheld devices, laptop computers, and personal digital assistants (PDAs). The ‘always ready’ capability leads to a new form of synergy between human and computer, characterized by long-term adaptation through constancy of user-interface.
One of the new equipment concepts in Mann's interface is the vitrionic contact lens. This technology takes in the field of vision and light through an eyepiece and interprets it via a processor before re-projecting it from the glasses onto the lens of the eye. Light is processed to Mann's specifications and then resynthesized as virtual light to be seen by his eye. One key element of the vitrionic lenses is that they provide a depth of field like that of a camera, which allows the eye to forgo focusing on one specific object in the field of vision. It also allows someone who wears prescription glasses to wear the vitrionic glasses instead and see just as well.
With the depth of field set so that all information is in focus, the viewer can add information into the field of vision at the same depth as the subject matter in the scene. The options for what information is projected into one's vision are limitless. Mann, like ASIMO, also uses a wearable face recognizer, which inserts a virtual name tag, and a video orbit tracker, which allows him to add layers on top of objects in the field of vision or to delete them completely. Mann finds the orbit tracker particularly useful:
…with billboards and other visual detritus that invades our personal space. I refer to this as “real world spam” and it can be deleted from your vision field if you need to make room for other material. If you are driving to your friends house you might see messages on these billboards that become directions on how to get there - customized messages, that only you see.
Or we can have customized messages that a small community sees - shared messages. Like leaving a message for my wife on the front of a shop, saying “I was here, check out this special.” So that message can replace spam.
Mann calls these tactics packet filters, filters that are defined in the user interface and that can be modified by the wearer to fit his or her specific needs and desires.
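A minimal sketch of how such a packet filter might behave in software, assuming hypothetical object-detection output; the rule names and data structures here are illustrative, not Mann's actual implementation:

```python
# Hypothetical "real world spam" filter: each detected object in the
# visual field is checked against user-defined rules before being
# re-rendered to the wearer's display.

def make_filter(rules):
    """Build a filter from user-defined rules.

    rules maps an object label (e.g. "billboard") to an action:
    "delete", "keep", or a replacement string to overlay instead.
    """
    def apply(detections):
        output = []
        for obj in detections:
            action = rules.get(obj["label"], "keep")
            if action == "delete":
                continue  # object is erased from the mediated view
            if action == "keep":
                output.append(obj)
            else:
                # replace the object's content with a custom message
                output.append({**obj, "content": action})
        return output
    return apply

# Example: erase billboards, replace a shop front with a note for a friend
spam_filter = make_filter({
    "billboard": "delete",
    "shopfront": "I was here, check out this special",
})

scene = [
    {"label": "billboard", "content": "BUY NOW"},
    {"label": "shopfront", "content": "Joe's Deli"},
    {"label": "tree", "content": None},
]
print(spam_filter(scene))
```

The point of the sketch is the separation Mann describes: the rules live in the user interface and can be edited by the wearer, while the filtering itself runs automatically on whatever the system detects.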
These images may also be shared with other people via the Internet. For instance, a person at the store picking out a birthday card for his father-in-law, unsure of which to choose, would be able to share his field of vision via computer with his wife at home. She would then be able to draw a circle around her choice, and this would appear in the man's vitrionic lenses. The ability to share one's vision with anyone extends the possibilities for interaction. When asked if this is a form of collective consciousness, Mann replied:
Well, from time to time I’ve recognized people I’ve never met before, because somebody on my Web site looking out through my eye sends me a message saying, “Please say hello to this person standing in front of you, who is an old high school buddy of mine.” So this touches on the whole notion of a collective consciousness.
Figure 7: A diagram showing the region coined by Masahiro Mori as the uncanny valley.
Figure 8: Steve Mann has been experimenting for decades with wearables and making them more comfortable for everyday use.
*excerpt from Formatting Gaia: A Comprehensive Outline of the Photographic Work
With every tool man is perfecting his own organs, whether motor or sensor, or is removing the limits to their functioning…Man has, as it were, become a kind of prosthetic God. When he puts on all his auxiliary organs, he is truly magnificent; but these organs have not grown on to him, and they still give him trouble at times…Future ages will bring with them new and probably unimaginable great advances in this field of civilization and will increase man’s likeness to God still more. But in the interests of our investigations, we will not forget that present-day man does not feel happy in his Godlike character.
—Sigmund Freud, Civilization and Its Discontent
For quite some time now, humans have been surrounded by a trend that amounts to the emergence of a new form of species. This embrace of technology and the drive toward embodiment is not a temporary fetish, but has become a strange sort of necessity for survival. Examining the culture surrounding humans, it is striking to acknowledge the amount of technological embodiment in use today: prosthetic limbs, cochlear implants, pacemakers, contact lenses, Lasik eye surgery, gastric lap bands, Bluetooth technologies, and mobile locative media are just a few of the integrated materials we've welcomed.
Many of these technologies have been embraced without the slightest resistance because they branch naturally from previous technologies, while others engendered more negative initial responses. Cochlear implants, for instance, remain a contested topic for many in the deaf community who believe that deafness should not be viewed as a disability, and thus not treated as one. For parents who want to give their deaf child the right to choose whether or not to have the implant later in life, a dilemma arises: young children have an aptitude for acquiring language in early development, so waiting for a deaf child to be old enough to make his or her own decision eliminates years of opportune language learning.
Other technological advancements in the medical field were also seen as a threat to human beings. When vaccinations for children were first introduced, they were widely met with aversion. Now, aside from a small percentage of parents who refuse vaccination due to religious or other personal reasons, it is not only the norm for vaccinations to be given, but also a requirement for a child to be enrolled in a public school system.
While these changes to the biology of the organic body will at first seem intrusive and give a sense of the uncanny, they will soon be admitted into a system of normalcy and accepted behaviors. This occurs not only because of the initial steps in accepting technology, but more so because the benefits granted to the human body via these upgrades will be so monumental as to seem irresistible. We would certainly not refuse advances in our senses such as an expanded field of vision (including the possibility of perceiving ultraviolet and infrared light), a higher aptitude for hearing sounds that now lie outside our frequency range, an electronic sense of taste that could let us know when a food is spoiled or harmful to our bodies, or an expanded sense of touch that heightens our perception of materials and our experiences of intimacy.
Again, when examining culture today, acceptance of the tech-body is emerging. For instance, on a return trip from Denver, CO in March of 2008, I sat next to a man who had in his ear a cellular Bluetooth adaptor, allowing him to hold phone conversations without the bother of holding a phone (fig. 6). The man was not only conversing with the person on the other end through his Bluetooth earpiece, but was also surfing the web from his handheld receiver. Not only are we becoming comfortable with machines applied to the inside of the body (the man's ear in this case), our field of comprehension is expanding farther and farther to hold simultaneous interactions with separate communication devices (the phone call and the internet, in this instance).
Human beings never seem to limit themselves in expanding their experience of the world they live in, especially when it comes in the form of perceived betterment. In the future, humans will continue down this path of synthesis between the organic and the inorganic in order to advance their place in the world. As evolution has shown, a more powerful, mindful, and healthful form of life will overshadow the lesser-qualified of the species. Through the application of prostheses, mind and memory upgrades, and bodily health programs, Homo sapiens will become something else, something that I like to call the techno-sapien.
Figure 6: Plane passenger immersed in Bluetooth technologies.
*excerpt from Formatting Gaia: A Comprehensive Outline of the Photographic Work
Indeed we were talking about control, from a certain angle. Primarily I was just highlighting that the Army is openly using video games to train. They are distributing the games [i.e., America's Army] to be used as training mechanisms.
I think your point about violence being present in other forms of play [i.e., football, hockey, wrestling] is completely valid. Human nature perhaps compels us to fight, compete, argue, challenge, etc., but those actions are delivered with an intention of brotherhood that ends in respect for one another in the competition [as so many competitors give voice to], as opposed to death and carnage. So I believe it must be acknowledged that these two similar, yet very different, acts of human nature cannot be compared so closely, given their different reasons for initiation and their different culminations. Wars end in death; games end in a handshake.
The last thing I intended was to argue that video games are a detriment to younger generations. We all know the ins and outs of that cyclical argument, and for that reason I feel comfortable steering our conversation away from it. Unless, that is, someone feels strongly about the matter and needs to voice their thoughts on it.
My intention, rather, was to bring to the table a notion of control [through monetary influence or technological triumph] that needs to be made intolerable as we progress into the 21st century. Though there have been many wonderful accomplishments, making killers [whether human or robotic] is not one of DARPA's finer ones.
There is a lot in your response to my post that I agree with and see as an important aspect of these technologies. I feel your perturbation is simply semantic in nature, the main concern being my use of the word cyborg. I use the term in a less segregating sense than you do, so that it does not exclude the human from its definition:
cyborg: a fictional or hypothetical person whose physical abilities are extended beyond normal human limitations by mechanical elements built into the body.
I take the definition to include modifications that we already use (the transitions to our biological make-up which you too have alluded to) such as contact lenses, pacemakers, or neural implants (already in use for certain patients with Parkinson's disease).
That stated, it is my hope that, whether we want to label the changes going on in the human body as those of a cyborg, transhuman, or posthuman sort, the same acknowledgment is made of the enormous impact being had on our evolutionary course.
Your notion that hegemony might be put to rest sounds brilliant. As for a practical solution to the reality of these old-world notions, I, and I trust most of us here, would be listening fervently. It's not that people are saying only a select few should have access to the best things in the world; rather, it's just a human condition, the way things really are. There are the haves and the have-nots. It is my hope that through broad use of technological change these gaps could decrease in time as well.
At the Neuroscience 2011 conference, scientists at The Rockefeller University, The Scripps Research Institute, and the University of Pennsylvania presented new research demonstrating the impact that life experiences can have on genes and behavior. The studies examine how such environmental information can be transmitted from one generation to the next — a phenomenon known as epigenetics. This new knowledge could ultimately improve understanding of brain plasticity, the cognitive benefits of motherhood, and how a parent's exposure to drugs, alcohol, and stress can alter brain development and behavior in their offspring.
The new findings show that:
- Brain cell activation changes a protein involved in turning genes on and off, suggesting the protein may play a role in brain plasticity.
- Prenatal exposure to amphetamines and alcohol produces abnormal numbers of chromosomes in fetal mouse brains. The findings suggest these abnormal counts may contribute to the developmental defects seen in children exposed to drugs and alcohol in utero.
- Cocaine-induced changes in the brain may be inheritable. Sons of male rats exposed to cocaine are resistant to the rewarding effects of the drug.
- Motherhood protects female mice against some of the negative effects of stress.
- Mice conceived through breeding — but not those conceived through reproductive technologies — show anxiety-like and depressive-like behaviors similar to their fathers. The findings call into question how these behaviors are transmitted across generations.
Source | Kurzweil AI
A robot that can control both its own arm and a person’s arm to manipulate objects in a collaborative manner has been developed by Montpellier Laboratory of Informatics, Robotics, and Microelectronics (LIRMM) researchers, IEEE Spectrum Automation reports.
The robot controls the human limb by sending small electrical currents to electrodes taped to the person’s forearm and biceps, which allows the robot to command the elbow and hand to move. In the experiment, the person holds a ball, and the robot holds a hoop; the robot, a small humanoid, has to coordinate the movement of both human and robot arms to successfully drop the ball through the hoop.
The researchers say their goal is to develop robotic technologies that can help people suffering from paralysis and other disabilities to regain some of their motor skills.
Source | Kurzweil AI
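The human-robot coordination in the LIRMM experiment amounts to a closed control loop: stimulate the muscles, measure the resulting elbow angle, adjust the current. A toy sketch of that loop follows; the proportional controller, gains, and arm model are my own illustrative assumptions, not LIRMM's actual control scheme:

```python
# Illustrative closed-loop functional electrical stimulation (FES):
# the controller adjusts stimulation current so the measured elbow
# angle tracks the angle the robot's plan requires.

def fes_step(target_deg, measured_deg, gain=0.5, max_ma=10.0):
    """Proportional controller: returns stimulation current in mA,
    clamped between zero and a safe maximum."""
    error = target_deg - measured_deg
    current = gain * error
    return max(0.0, min(max_ma, current))

# Simulate with a crude first-order arm model in which the elbow
# moves a few degrees per step in proportion to the applied current.
angle = 20.0   # starting elbow angle (degrees)
target = 90.0  # angle needed to drop the ball through the hoop
for step in range(50):
    ma = fes_step(target, angle)
    angle += 1.5 * ma  # toy plant response, degrees per step
print(round(angle, 1))  # settles near the 90-degree target
```

Real FES controllers must also handle muscle fatigue, nonlinear recruitment, and safety limits; the clamp on the current is the only one of those concerns this sketch models.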
Four Wave Gliders — self-propelled robots, each about the size of a dolphin — left San Francisco on Nov. 17 for a 60,000 kilometer journey, IEEE Spectrum Automation reports.
Built by Liquid Robotics, the robots will use waves to power their propulsion systems and the Sun to power their sensors, as a capability demonstration. They will be measuring things like water salinity, temperature, clarity, and oxygen content; collecting weather data; and gathering information on wave features and currents.
The data from the fleet of robots is being streamed via the Iridium satellite network and made freely available on Google Earth’s Ocean Showcase.
Source | Kurzweil AI
“Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton,” said study leader Miguel Nicolelis, MD, PhD, professor of neurobiology at Duke University Medical Center and co-director of the Duke Center for Neuroengineering.
Sensing textures of virtual objects
Without moving any part of their real bodies, the monkeys used their electrical brain activity to direct the virtual hands of an avatar to the surface of virtual objects and differentiate their textures. Although the virtual objects employed in this study were visually identical, they were designed to have different artificial textures that could only be detected if the animals explored them with virtual hands controlled directly by their brain’s electrical activity.
The texture of the virtual objects was expressed as a pattern of electrical signals transmitted to the monkeys’ brains. Three different electrical patterns corresponded to each of three different object textures.
Because no part of the animal’s real body was involved in the operation of this brain-machine-brain interface, these experiments suggest that in the future, patients who were severely paralyzed due to a spinal cord lesion may take advantage of this technology to regain mobility and also to have their sense of touch restored, said Nicolelis.
First bidirectional link between brain and virtual body
“This is the first demonstration of a brain-machine-brain interface (BMBI) that establishes a direct, bidirectional link between a brain and a virtual body,” Nicolelis said.
“In this BMBI, the virtual body is controlled directly by the animal’s brain activity, while its virtual hand generates tactile feedback information that is signaled via direct electrical microstimulation of another region of the animal’s cortex. We hope that in the next few years this technology could help to restore a more autonomous life to many patients who are currently locked in without being able to move or experience any tactile sensation of the surrounding world,” Nicolelis said.
“This is also the first time we’ve observed a brain controlling a virtual arm that explores objects while the brain simultaneously receives electrical feedback signals that describe the fine texture of objects ‘touched’ by the monkey’s newly acquired virtual hand.
“Such an interaction between the brain and a virtual avatar was totally independent of the animal’s real body, because the animals did not move their real arms and hands, nor did they use their real skin to touch the objects and identify their texture. It’s almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves.”
The combined electrical activity of populations of 50 to 200 neurons in the monkey’s motor cortex controlled the steering of the avatar arm, while thousands of neurons in the primary tactile cortex were simultaneously receiving continuous electrical feedback from the virtual hand’s palm that let the monkey discriminate between objects, based on their texture alone.
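The decoding described above, in which the activity of 50 to 200 motor-cortex neurons steers the avatar arm, is often illustrated with a population-vector scheme: each neuron fires most strongly for its preferred movement direction, and a rate-weighted sum of those directions recovers the intended movement. Here is a toy version, with cosine tuning and numbers invented for illustration; the study's actual decoder is more sophisticated:

```python
import math

# Toy population-vector decoder: each neuron has a preferred direction
# and fires in proportion to the cosine of the angle between that
# direction and the intended movement (classic cosine tuning).

def simulate_rates(intended_deg, preferred_dirs_deg, baseline=10.0, depth=8.0):
    """Firing rates for a population given an intended movement direction."""
    return [
        baseline + depth * math.cos(math.radians(intended_deg - p))
        for p in preferred_dirs_deg
    ]

def decode(rates, preferred_dirs_deg, baseline=10.0):
    """Weight each preferred direction by the neuron's rate above
    baseline, sum the vectors, and read off the decoded angle."""
    x = sum((r - baseline) * math.cos(math.radians(p))
            for r, p in zip(rates, preferred_dirs_deg))
    y = sum((r - baseline) * math.sin(math.radians(p))
            for r, p in zip(rates, preferred_dirs_deg))
    return math.degrees(math.atan2(y, x)) % 360

# 8 neurons with evenly spaced preferred directions
prefs = [i * 45 for i in range(8)]
rates = simulate_rates(60.0, prefs)
print(round(decode(rates, prefs), 1))  # recovers the intended 60.0 degrees
```

With preferred directions spread evenly around the circle, the weighted vector sum points exactly along the intended direction, which is why so few neurons suffice to steer a cursor or a virtual arm.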
Robotic exoskeleton for paralyzed patients
“The remarkable success with non-human primates is what makes us believe that humans could accomplish the same task much more easily in the near future,” Nicolelis said.
The findings provide further evidence that it may be possible to create a robotic exoskeleton that severely paralyzed patients could wear in order to explore and receive feedback from the outside world, Nicolelis said. The exoskeleton would be directly controlled by the patient’s voluntary brain activity to allow the patient to move autonomously. Simultaneously, sensors distributed across the exoskeleton would generate the type of tactile feedback needed for the patient’s brain to identify the texture, shape and temperature of objects, as well as many features of the surface upon which they walk.
This overall therapeutic approach is the one chosen by the Walk Again Project, an international, non-profit consortium, established by a team of Brazilian, American, Swiss, and German scientists, which aims at restoring full-body mobility to quadriplegic patients through a brain-machine-brain interface implemented in conjunction with a full-body robotic exoskeleton.
The international scientific team recently proposed to carry out its first public demonstration of such an autonomous exoskeleton during the opening game of the 2014 FIFA Soccer World Cup that will be held in Brazil.
Ref.: Joseph E. O’Doherty, Mikhail A. Lebedev, Peter J. Ifft, Katie Z. Zhuang, Solaiman Shokur, Hannes Bleuler, and Miguel A. L. Nicolelis, Active tactile exploration using a brain–machine–brain interface, Nature, October 2011 [doi:10.1038/nature10489]
Source | KurzweilAI
When it happened, emotions flashed like lightning.
The nearby robotic hand that Tim Hemmes was controlling with his mind touched his girlfriend Katie Schaffer’s outstretched hand.
One small touch for Mr. Hemmes; one giant reach for people with disabilities.
Tears of joy that flowed in an Oakland laboratory that day continued later, when Mr. Hemmes toasted his and University of Pittsburgh researchers’ success at a local restaurant with two daiquiris.
A simple act for most people marked a major advance in two decades of research into what was once the stuff of science fiction.
Mr. Hemmes’ success in putting the robotic hand in the waiting hand of Ms. Schaffer, 27, of Philadelphia, represented the first time a person with quadriplegia has used his mind to control a robotic arm so masterfully.
The 30-year-old man from Connoquenessing Township, Butler County, hadn’t moved his arms, hands or legs since a motorcycle accident seven years earlier. But Mr. Hemmes had practiced six hours a day, six days a week for nearly a month to move the arm with his mind.
That successful act increases hope for people with paralysis or loss of limbs that they can feed and dress themselves and open doors, among other tasks, with a mind-controlled robotic arm. It’s also improved the prospects of wiring around spinal cord injuries to allow motionless arms and legs to function once again.
“I think the potential here is incredible,” said Dr. Michael Boninger, director of UPMC’s Rehabilitation Institute and a principal investigator in the project. “This is a breakthrough for us.”
Mr. Hemmes? They say he’s a rock star.
In a project led by Andrew Schwartz, Ph.D., a University of Pittsburgh professor of neurobiology, researchers taught a monkey how to use a robotic arm mentally to feed itself marshmallows. Electrodes had been shallowly implanted in its brain to read signals from neurons known to control arm motion.
Electrocorticography, or ECoG — in which an electrode grid is surgically placed against the brain without penetrating it — captures brain signals less intrusively.
ECoG has been used to locate the sites of seizures and to run other experiments in patients with epilepsy. Those experiments were a prelude to seeking a candidate with quadriplegia to test ECoG's ability to control a robotic arm developed by Johns Hopkins University.
The still unanswered question was whether the brains of people with long-term paralysis still produced signals to move their limbs.
ECoG picks up an array of brain signals, almost like a secret code or new language, that a computer algorithm can interpret and then move a robotic arm based on the person’s intentions. It’s a simple explanation for complex science.
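The "computer algorithm" that interprets this code is, at its simplest, a trained mapping from neural features to intended movement. A minimal sketch using an ordinary-least-squares linear decoder on synthetic data follows; the features, dimensions, and weights are invented for illustration, and real ECoG decoding pipelines are far more involved:

```python
# Minimal linear decoder: learn weights that map ECoG band-power
# features to 2-D cursor velocity via ordinary least squares, using
# synthetic training data in place of real recordings.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features = 500, 16
true_W = rng.normal(size=(n_features, 2))  # hidden feature-to-velocity map

X = rng.normal(size=(n_samples, n_features))             # ECoG features (synthetic)
V = X @ true_W + 0.01 * rng.normal(size=(n_samples, 2))  # cursor velocities

# Fit decoder weights by least squares, then decode a new sample
W_hat, *_ = np.linalg.lstsq(X, V, rcond=None)
x_new = rng.normal(size=(1, n_features))
v_pred = x_new @ W_hat
print(v_pred.shape)  # (1, 2): predicted x and y cursor velocity
```

The training phase corresponds to the weeks Mr. Hemmes spent practicing: paired recordings of brain signals and intended cursor motion are what let the algorithm learn the mapping.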
Mr. Hemmes’ name cropped up so many times as a potential candidate that the team called him to gauge his interest.
He said no.
He was already involved in a research project in Cleveland and feared this one would interfere. But knowing they had the ideal candidate, the researchers called back. This time he agreed, as long as it would not limit his participation in future phases of research.
Mr. Hemmes became quadriplegic July 11, 2004, apparently after a deer darted onto the roadway, causing him to swerve his motorcycle onto gravel where his shoulder hit a mailbox, sending him flying headfirst into a guardrail. The top of his helmet became impaled on a guardrail I-beam, rendering his head motionless while his body continued flying, snapping his neck at the fourth cervical vertebra.
A passer-by found him with blue lips and no signs of breathing. Mr. Hemmes was flown by rescue helicopter to UPMC Mercy and diagnosed with quadriplegia — a condition in which he had lost use of his limbs and his body below the neck or shoulders. He had to learn how to breathe on his own. His doctor told him it was the worst accident he'd ever seen in which the person survived.
But after the process of adapting psychologically to quadriplegia, Mr. Hemmes chose to pursue a full life, especially after he got a device to operate a computer and another to operate a wheelchair with head motions.
Since January, he has operated the website — www.Pittsburghpitbullrescue.com — to rescue homeless pit bulls and find them new owners.
The former hockey player’s competitive spirit and willingness to face risk were key attributes. Elizabeth Tyler-Kabara, the UPMC neurosurgeon who would install the ECoG in Mr. Hemmes’ brain, said he had strong motivation and a vision that paralysis could be cured.
Ever since his accident, Mr. Hemmes said, he’s had the goal of hugging his daughter Jaylei, now 8. This could be the first step.
“It’s an honor that they picked me, and I feel humbled,” Mr. Hemmes said.
Mr. Hemmes underwent several hours of surgery to install the ECoG at a precise location against the brain. Wires running under the skin down to a port near his collarbone — where wires can connect to the robotic arm — left him with a stiff neck for a few days.
Two days after surgery, he began exhaustive training on mentally maneuvering a computer cursor in various directions to reach targets and make them disappear. Next he learned to move the cursor diagonally before working for hours to capture targets in a three-dimensional computer display.
The U.S. Food and Drug Administration allowed the trial to last only 28 days, after which the ECoG grid was removed. The project, initially funded by UPMC, has received more than $6 million in funding from the Department of Veterans Affairs, the National Institutes of Health, and the U.S. Department of Defense's Defense Advanced Research Projects Agency, known as DARPA.
Initially Mr. Hemmes tried thinking about flexing his arm to move the cursor. But he had better success visually grabbing the ball-shaped cursor to throw it toward a target on the screen. The “mental eye-grabbing” worked best when he was relaxed.
Soon he was capturing 15 of 16 targets and sometimes all 16 during timed sessions. The next challenge was moving the robotic arm with his mind.
The same mental processes worked, but the arm moved more slowly and in real space, and time was ticking away as the experiment approached its final days last month. With Mr. Hemmes finally moving the arm in all directions, Wei Wang — assistant professor of physical medicine and rehabilitation at Pitt's School of Medicine who also has worked on the signaling system — stood in front of him and raised his hand.
The robotic arm that Mr. Hemmes was controlling moved with fits and starts but in time reached Dr. Wang’s upheld hand. Mr. Hemmes gave him a high five.
The big moment arrived.
Katie Schaffer stood before her boyfriend with her hand extended. "Baby," she said, encouraging him, "touch my hand."
It took several minutes, but he raised the robotic hand and pushed it toward Ms. Schaffer until its palm finally touched hers. Tears flowed.
“It’s the first time I’ve reached out to anybody in over seven years,” Mr. Hemmes said. “I wanted to touch Katie. I never got to do that before.”
“I have tattoos, and I’m a big, strong guy,” he said in retrospect. “So if I’m going to cry, I’m going to bawl my eyes out. It was pure emotion.”
Mr. Hemmes said his accomplishments represent a first step toward “a cure for paralysis.” The research team is cautious about such statements without denying the possibility. They prefer identifying the goal of restoring function in people with disabilities.
“This was way beyond what we expected,” Dr. Tyler-Kabara said. “We really hit a home run, and I’m thrilled.”
The next phase will include up to six people tested in another 30-day trial with ECoG. A year-long trial will test the electrode array that shallowly penetrates the brain. Goals during these phases include expanding the degrees of arm motions to allow people to “pick up a grape or grasp and turn a door knob,” Dr. Tyler-Kabara said.
Anyone interested in participating should call 1-800-533-8762.
Mr. Hemmes says he will participate in future research.
“This is something big, but I’m not done yet,” he said. “I want to hug my daughter.”
A new map of the moon has uncovered a trove of areas rich in precious titanium ore, with some lunar rocks harboring 10 times as much of the stuff as rocks here on Earth do.
The map, which combined observations in visible and ultraviolet wavelengths, revealed the valuable titanium deposits. These findings could shed light on some of the mysteries of the lunar interior, and could also lay the groundwork for future mining on the moon, researchers said.
“Looking up at the moon, its surface appears painted with shades of grey — at least to the human eye,” Mark Robinson, of Arizona State University, said in a statement. “The maria appear reddish in some places and blue in others. Although subtle, these color variations tell us important things about the chemistry and evolution of the lunar surface. They indicate the titanium and iron abundance, as well as the maturity of a lunar soil.”
The results of the study were presented Friday (Oct. 7) at the joint meeting of the European Planetary Science Congress and the American Astronomical Society’s Division for Planetary Sciences in Nantes, France.
Mapping the lunar surface
The map of the moon’s surface was constructed using data from NASA’s Lunar Reconnaissance Orbiter (LRO), which has been circling the moon since June 2009. The probe’s wide-angle camera snapped pictures of the surface in seven different wavelengths at different resolutions.
Since specific minerals strongly reflect or absorb different parts of the electromagnetic spectrum, LRO’s instruments were able to give scientists a clearer picture of the chemical composition of the moon’s surface.
Robinson and his colleagues stitched together a mosaic using roughly 4,000 images that had been collected by the spacecraft over one month.
The researchers scanned the lunar surface and compared the brightness in the range of wavelengths from ultraviolet to visible light, picking out areas that are abundant in titanium. The scientists then cross-referenced their findings with lunar samples that were brought back to Earth from NASA’s Apollo flights and the Russian Luna missions.
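The comparison described above amounts to a band-ratio calculation: the ratio of ultraviolet to visible reflectance in each map pixel is converted to a titanium estimate using a calibration anchored to the Apollo and Luna sample sites. The sketch below illustrates the idea on a tiny synthetic "image"; the wavelengths and calibration coefficients are invented for illustration and are not the published values.

```python
import numpy as np

def tio2_percent(uv_reflectance, vis_reflectance, slope=50.0, intercept=-28.0):
    """Estimate TiO2 weight percent from a UV/visible reflectance ratio.

    slope and intercept stand in for a linear fit against ground-truth
    abundances measured in returned Apollo and Luna samples (values invented).
    """
    ratio = uv_reflectance / vis_reflectance
    # Clamp to zero so noisy low-ratio pixels don't go negative.
    return np.clip(slope * ratio + intercept, 0.0, None)

# Tiny synthetic two-band "image" of surface reflectance values.
uv = np.array([[0.058, 0.062],
               [0.070, 0.066]])
vis = np.array([[0.100, 0.100],
                [0.100, 0.100]])

ti_map = tio2_percent(uv, vis)   # per-pixel TiO2 estimate, in percent
```

Pixels with a higher UV-to-visible ratio come out more titanium-rich, matching the roughly 1 to 10 percent range the article reports for the lunar maria.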
These titanium-rich areas on the moon puzzled the researchers. The highest abundance of titanium in similar rocks on Earth hovers around 1 percent or less, the scientists explained. The new map shows that these troves of titanium on the moon range from about 1 percent to a little more than 10 percent.
“We still don’t really understand why we find much higher abundances of titanium on the moon compared to similar types of rocks on Earth,” Robinson said. “What the lunar titanium-richness does tell us is something about the conditions inside the moon shortly after it formed, knowledge that geochemists value for understanding the evolution of the moon.”
Valuable titanium ore
Titanium on the moon is primarily found in the mineral ilmenite, a compound that contains iron, titanium and oxygen. If humans one day mine on the moon, they could break down ilmenite to separate these elements.
Furthermore, Apollo data indicated that titanium-rich minerals are more efficient at retaining solar wind particles, such as helium and hydrogen. These gases would likely be vital resources in the construction of lunar colonies and for exploration of the moon, the researchers said. [Lunar Legacy: 45 Apollo Moon Mission Photos]
“Astronauts will want to visit places with both high scientific value and a high potential for resources that can be used to support exploration activities,” Robinson said. “Areas with high titanium provide both — a pathway to understanding the interior of the moon and potential mining resources.”
The lunar map also shows how space weather changes the surface of the moon. Charged particles from solar wind and micrometeorite impacts can change the moon’s surface materials, pulverizing rock into a fine powder and altering the chemical composition of the lunar surface.
“One of the exciting discoveries we’ve made is that the effects of weathering show up much more quickly in ultraviolet than in visible or infrared wavelengths,” study co-author Brett Denevi, of Johns Hopkins University Applied Physics Laboratory in Laurel, Md., said in a statement. “In the [Lunar Reconnaissance Orbiter Camera] ultraviolet mosaics, even craters that we thought were very young appear relatively mature. Only small, very recently formed craters show up as fresh regolith exposed on the surface.”
Source | SPACE
Mapping real-world motions to “self-animated” virtual avatars, using body tracking to communicate a wide range of gestures, helps people communicate better in virtual worlds like Second Life, say researchers from the Max Planck Institute for Biological Cybernetics and Korea University.
They conducted two experiments to investigate whether head-mounted-display virtual reality is useful for researching the influence of body gestures in communication, and whether body gestures help convey the meaning of a word. Participants worked in pairs and played a communication game in which one person had to describe the meanings of words to the other.
Ref.: Trevor J. Dodds et al., Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments, PLoS One, DOI: 10.1371/journal.pone.0025759 (free access)
Source | KurzweilAI