The boy, a dark-haired 6-year-old, is playing with a new companion.

The two hit it off quickly — unusual for the 6-year-old, who has autism — and the boy is imitating his playmate’s every move, now nodding his head, now raising his arms.

“Like Simon Says,” says the autistic boy’s mother, seated next to him on the floor.

Yet soon he begins to withdraw; in a video of the session, he covers his ears and slumps against the wall.

But the companion, a 91cm-tall robot being tested at the University of Southern California, maintains eye contact and performs another move, raising one arm up high.

Up goes the boy’s arm — and now he is smiling at the machine.

In a handful of laboratories around the world, computer scientists are developing robots like this one: highly programmed machines that can engage people and teach them simple skills, including household tasks, vocabulary or, as in the case of the boy, playing, elementary imitation and taking turns.

So far, the teaching has been very basic, delivered mostly in experimental settings, and the robots are still works in progress, a hackers’ gallery of moving parts that, like mechanical savants, each do some things well at the expense of others.

Yet the most advanced models are fully autonomous, guided by artificial intelligence software like motion tracking and speech recognition, which can make them just engaging enough to rival humans at some teaching tasks.

Researchers say the pace of innovation is such that these machines should begin to learn as they teach, becoming the sort of infinitely patient, highly informed instructors that would be effective in subjects like foreign language or in repetitive therapies used to treat developmental problems like autism.

LESSONS FROM RUBI

“Kenka,” says a childlike voice. “Ken-ka.”

Standing on a polka-dot carpet at a preschool on the campus of the University of California, San Diego, a robot named RUBI is teaching Finnish to a 3-year-old boy.
RUBI looks like a desktop computer come to life: its screen-torso, mounted on a pair of shoes, sprouts mechanical arms and a lunchbox-sized head fitted with video cameras, a microphone and voice capability. RUBI wears a bandanna around its neck and a fixed happy-face smile, below a pair of large, plastic eyes.

It picks up a white sneaker and says kenka, the Finnish word for shoe, before returning it to the floor. “Feel it; I’m a kenka.”

In a video of this exchange, the boy picks up the sneaker, says “kenka, kenka” — and holds up the shoe for the robot to see.

In the San Diego classroom where RUBI has taught Finnish, researchers are finding that the robot enables preschool children to score significantly better on tests, compared with less interactive learning, such as from tapes.

Researchers in social robotics — a branch of computer science devoted to enhancing communication between humans and machines — at Honda Labs in Mountain View, California, have found a similar result with their robot, a 91cm-tall character called Asimo, which looks like a miniature astronaut. In one 20-minute session the machine taught grade-school students how to set a table, improving their accuracy by about 25 percent, a recent study found.

MAKING THE CONNECTION

In a lab at the University of Washington, Morphy, a pint-sized robot, catches the eye of an infant girl and turns to look at a toy.

No luck; the girl does not follow its gaze, as she would a human’s.

In a video the researchers made of the experiment, the girl next sees the robot “waving” to an adult. Now she’s interested; the sight of the machine interacting with an adult registers it as a social being in her young brain. She begins to track what the robot is looking at — to the right, the left, down. The machine has elicited what scientists call gaze-following, an essential first step of social exchange.
“Before they have language, infants pay attention to what I call informational hot spots,” where their mother or father is looking, said Andrew Meltzoff, a psychologist who is co-director of the university’s Institute for Learning and Brain Sciences. This, he said, is how learning begins.

This basic finding, to be published later this year, is one of dozens from a field called affective computing that is helping scientists discover exactly which features of a robot make it most convincingly “real” as a social partner, a helper, a teacher.

The San Diego researchers found that if RUBI reacted to a child’s expression or comment too quickly, it threw off the interaction; the same happened if the response was too slow. But if the robot reacted within about a second and a half, child and machine fell smoothly into sync.

One way to begin this process is to have a child mimic the physical movements of a robot, and vice versa. In a continuing study financed by the National Institutes of Health, scientists at the University of Connecticut are conducting therapy sessions for children with autism using a French robot called Nao, a 61cm-tall humanoid that looks like an elegant Transformer toy. The robot, remotely controlled by a therapist, demonstrates martial arts kicks and chops and urges the child to follow suit; then it encourages the child to lead.

This simple mimicry seems to build a kind of trust and increase sociability, said Anjana Bhat, an assistant professor in the department of education who is directing the experiment.

LEARNING FROM HUMANS

On a recent Monday afternoon, Crystal Chao, a graduate student in robotics at the Georgia Institute of Technology, was teaching a 1.5m-tall robot named Simon to put away toys. She had given some instructions — the flower goes in the red bin, the block in the blue bin — and Simon had correctly put away several of these objects.
But now the robot was stumped, its doughboy head tipped forward, its fawn eyes blinking at a green toy water sprinkler.

Chao repeated her query.

“Let me see,” said Simon, in a childlike machine voice, reaching to pick up the sprinkler. “Can you tell me where this goes?”

“In the green bin,” came the answer.

Simon nodded, dropping it in that bin.

“Makes sense,” the robot said.

In addition to tracking motion and recognizing language, Simon accumulates knowledge through experience.

Just as humans can learn from machines, machines can learn from humans, said Andrea Thomaz, an assistant professor of interactive computing at Georgia Tech who directs the project.

This ability to monitor and learn from experience is the next great frontier for social robotics — and it probably depends, in large part, on unraveling the secrets of how the human brain accumulates information during infancy.

In San Diego, researchers are trying to develop a human-looking robot with sensors that approximate the complexity of a year-old infant’s abilities to feel, see and hear. Babies learn, seemingly effortlessly, by experimenting, by mimicking, by moving their limbs. Could a machine with sufficient artificial intelligence do the same? And what kind of learning systems would be sufficient?

The research group has bought a US$70,000 robot, built by a Japanese company, that is controlled by a pneumatic pressure system; that system will act as its senses, in effect helping it map out the environment by “feeling,” in addition to “seeing” with embedded cameras. And that is the easy part.

The researchers are shooting for nothing less than capturing the foundation of human learning — or, at least, its artificial intelligence equivalent. If robots can learn to learn, on their own and without instruction, they could in principle make the kind of teachers that are responsive to the needs of a class, even an individual child.