Busan City Hall Book Summary
   Global Trends







  • Training Service Robots for the "Mainstream"

    For decades people have envisioned service robots in health care and the home, yet the closest we've come are Level III self-driving cars, robot vacuums, an occasional robot bartender, and some pilot projects in hospitals.

    The biggest problem is the complexity of real-world service environments and the nuances of training a robot to deal with real-world service scenarios. Fortunately, researchers are making steady progress toward creating robots that can be cost-effectively trained to perform real-world service tasks.

    What is happening on the leading edge of robotics, and what does this portend for commercialization in the coming years? Consider some key examples.

    Imagine if robots could learn from simply watching demonstrations. That would enable us to show a home-care robot how to do routine chores like setting a dinner table and then just let it start doing it. In the workplace, people could train robots the way they train new employees, showing them how to perform a specific set of duties and then letting them do them. And on the road, a self-driving car could learn how to drive safely by simply watching a human drive around the neighborhood. In short, robots would learn the way people do.

    Fortunately, we're making significant progress toward realizing that vision. Researchers at USC have designed a system that lets robots autonomously learn complicated tasks from a very small number of demonstrations, which might be "imperfect." The results of their study, titled Learning from Demonstrations Using Signal Temporal Logic, were presented at the Conference on Robot Learning in November 2020.

    The system the USC researchers have created works by evaluating the quality of each demonstration, enabling the system to learn from the mistakes it sees as well as the successes. While current state-of-the-art methods need at least 100 demonstrations to nail a specific task, the new method allows robots to learn from only a handful of demonstrations. It also allows robots to learn more intuitively, the way humans learn from each other: watching as someone executes a task, even imperfectly, then trying it. As we all know, demonstrations don't have to be "perfect" for humans to glean knowledge from watching each other perform them.

    Few people have the programming knowledge needed to explicitly specify what the robot needs to do in every situation. Furthermore, humans cannot possibly demonstrate everything that a robot needs to know or anticipate what happens when the robot encounters something it hasn't seen before. Therefore, learning from demonstrations is important for widely deploying robots in the home or workplace.

    The USC researchers addressed the common hurdles to robot training by integrating "signal temporal logic" (or STL), which evaluates the quality of demonstrations and automatically ranks them to create inherent rewards. Therefore, even if some parts of the demonstrations do not make any sense based on the logic requirements, the robot can still learn from the imperfect parts. In a way, the system is coming to its own conclusions about the accuracy or success of a demonstration.

    Let's say a robot learns from different types of demonstrations, such as hands-on demonstrations, videos, or simulations. If the "robot's teacher" does something that is very unsafe, standard approaches will do one of two things: either the robot will completely disregard it, or it will learn the wrong thing.

    In contrast, STL uses "common sense reasoning" to understand which parts of a demonstration are good and which parts are not. In essence, this is exactly what humans also do. Take, for example, a driving demonstration where someone ignores a stop sign. This would be ranked lower by the system than a demonstration by a good driver. But if, during this demonstration, the driver does something intelligent - for instance, applying the brakes to avoid a crash - the robot will still learn from that smart action.

    STL is an expressive mathematical symbolic language that enables robots to reason about current and future outcomes. In the world of cyber-physical systems like robots and self-driving cars, where timing is crucial, STL allows reasoning about physical signals. And the researchers were surprised by the extent of the system's success.
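    To make the idea concrete, here is a minimal sketch in Python (purely illustrative, not the USC implementation) of how an STL-style robustness score can rank demonstrations and turn that ranking into reward weights. The safety specification, signal names, and numbers are all assumptions invented for the example.

    # Illustrative sketch only: rank demonstrations by the robustness of an STL-style
    # safety spec, then map ranks to reward weights so imperfect demos are down-weighted
    # rather than discarded. Spec (made up): "always speed <= 30 AND always clearance >= 0".
    from typing import Dict, List
    import numpy as np

    def robustness(demo: Dict[str, np.ndarray]) -> float:
        """Quantitative satisfaction: positive = satisfied with margin, negative = violated."""
        speed_margin = np.min(30.0 - demo["speed"])      # worst-case speed margin over time
        clearance_margin = np.min(demo["clearance"])     # worst-case distance to an obstacle
        return min(speed_margin, clearance_margin)       # conjunction = minimum of margins

    def rank_to_rewards(demos: List[Dict[str, np.ndarray]]) -> List[float]:
        """Rank demos from worst to best and map the ranks onto [0, 1] reward weights."""
        scores = [robustness(d) for d in demos]
        order = np.argsort(scores)                       # indices from worst to best
        weights = np.empty(len(demos))
        weights[order] = np.linspace(0.0, 1.0, len(demos))
        return weights.tolist()

    # One careful demonstration and one that speeds and nearly hits an obstacle.
    good = {"speed": np.array([20., 25., 28.]), "clearance": np.array([5., 4., 3.])}
    bad = {"speed": np.array([20., 35., 28.]), "clearance": np.array([1., 0.2, 2.])}
    print(rank_to_rewards([good, bad]))                  # the unsafe demo gets the lower weight

    The point of the down-weighting, rather than outright rejection, is exactly the behavior described above: an imperfect demonstration still contributes, it just counts for less.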

    Going forward, the USC researchers will continue working to integrate this approach into robotic systems to help them efficiently learn from demonstrations and experience.

    The question several Johns Hopkins University researchers asked was, "How do we get the robot to learn a skill?"

    Their answer was to train robots with positive reinforcement. This approach is familiar to anyone who has used treats to change a dog's behavior. In this case, the team dramatically improved a robot's skills and did it quickly enough to make training robots for real-world applications more realistic.

    Unlike humans and animals which are born with highly intuitive brains, computers are blank slates and must learn everything from scratch. But in both cases, true learning is often accomplished with trial-and-error, and roboticists are still figuring out how robots can learn efficiently from their mistakes.

    The Johns Hopkins team accomplished that by devising a reward system which works for a robot the same way treats work for a dog. Whereas a dog might get a cookie for a job well done, the robot earns numeric points.

    For instance, when the researchers wanted to teach a robot named Spot to stack blocks, the robot needed to learn how to focus on "constructive" actions. As the robot explored the blocks, it quickly learned that correct behaviors related to stacking earned high points, but incorrect ones earned nothing. For example, "Reach out but don't grasp a block?" earned no points. And, "Knock over a stack?" definitely earned no points. On the other hand, Spot earned the most points by placing the last block on top of a four-block stack.
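    As a rough illustration of such a point system (a sketch, not the Johns Hopkins code; the action names and point values are invented), a stacking reward might look like this:

    # Hedged sketch: a point-based reward for block stacking, in the spirit of
    # "correct stacking actions earn points, everything else earns nothing."
    def stacking_reward(action: str, height_before: int, height_after: int) -> float:
        """Return numeric 'treats' for the robot after one action."""
        if height_after < height_before:
            return 0.0                      # knocked the stack over: no points
        if action == "place" and height_after == height_before + 1:
            if height_after == 4:
                return 10.0                 # finished the four-block stack: biggest reward
            return 1.0                      # made progress: small reward
        return 0.0                          # reaching without grasping, idle moves, etc.

    # Example roll-out: the learner quickly discovers that only progress scores points.
    print(stacking_reward("reach", 2, 2))   # 0.0
    print(stacking_reward("place", 2, 3))   # 1.0
    print(stacking_reward("place", 3, 4))   # 10.0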

    This training tactic not only worked, it took just days to teach the robot what used to take weeks. The team was able to reduce the practice time by first training a simulated robot, which is a lot like a video game, then running tests with Spot.

    The robot is programmed to "want" the higher score. So, it quickly learns the right behavior to get the best reward. Using other approaches, it took Spot a month of practice to achieve 100% accuracy with the blocks, but it took only two days with positive reinforcement.
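    That "wanting" is just arithmetic on the points. A generic reinforcement-learning update such as Q-learning, shown below as a sketch (the source does not say which algorithm the team used), nudges up the estimated value of whichever action earned points, so the robot picks it more often next time:

    # Minimal, generic Q-learning update (illustrative; not necessarily the team's method).
    q = {}                                   # (state, action) -> estimated value
    ALPHA, GAMMA = 0.1, 0.9                  # learning rate, discount factor

    def update(state, action, reward, next_state, next_actions):
        best_next = max((q.get((next_state, a), 0.0) for a in next_actions), default=0.0)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

    update(state="2_blocks_stacked", action="place", reward=1.0,
           next_state="3_blocks_stacked", next_actions=["place", "reach"])
    print(q)   # the 'place' action from a 2-block stack now has positive estimated value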

    Positive reinforcement not only helped the robot teach itself to stack blocks; with the point system, it just as quickly learned several other tasks, including how to play a simulated navigation game. The important takeaway is that the ability to learn from mistakes in all types of situations is critical for designing a robot that can adapt to new environments.

    The team anticipates that these findings could help train household robots to do laundry and wash dishes - tasks that could be commercially useful and help seniors live independently. It could also help improve self-driving cars.

    At Johns Hopkins, the goal is to eventually develop robots that can do complex tasks in the real world - like product assembly, caring for the elderly and even surgery. Today, people don't know how to program tasks like those - the real world is just too complex. But work like this shows us that there are ways that robots can learn how to accomplish such real-world tasks in a safe and efficient way.

    For instance, a robot that can cook has been an aspiration of science fiction writers, futurists, and scientists for decades. As artificial intelligence techniques have advanced, companies have built prototype robot chefs, although none of these are commercially available today, largely because they lack the skill level of their human counterparts.

    Teaching a robot to prepare and cook food is a challenging task, since it must deal with complex problems in robotic manipulation, computer vision, sensing and human-robot interaction in order to produce a consistent end-product.

    In addition, tastes differ from person to person. And since taste is not universal, universal solutions don't exist. Other research groups have trained robots to make cookies, pancakes and even pizza, but these robot chefs have not been optimized for the many subjective variables involved in cooking.

    Egg dishes, omelets in particular, have long been considered a test of culinary skill. An omelet is one of those dishes that is easy to make, but difficult to make well. Therefore, a research team at England's Cambridge University thought it would be an ideal test to optimize an omelet-making robot chef for taste, texture, aroma and appearance.

    In collaboration with domestic appliance company Beko, the Cambridge team trained their robot chef to prepare an omelet by performing every step from cracking the eggs through to "plating" the finished dish. The work was performed in Cambridge's Department of Engineering, using a test kitchen supplied by Beko plc and Symphony Group.

    The machine learning technique they developed makes use of a statistical tool called Bayesian inference. In order to avoid over-stuffing the human tasters with omelets, this technique squeezed as much information as possible out of a limited number of data samples.

    Another challenge they faced was the subjectivity of the human sense of taste. Humans aren't very good at giving absolute assessments of food; when it comes to taste, they generally need to give relative ones. So, the team needed to tweak a machine learning algorithm, called the batch algorithm, so that human tasters could give feedback based on comparative evaluations rather than absolute ones.
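    A hedged sketch of that general recipe follows; it is not the Cambridge/Beko code, and the omelet parameters, win-rate scoring, and candidate ranges are invented for illustration. Pairwise "A tasted better than B" judgments are collapsed into per-recipe scores, a Gaussian-process surrogate is fit over the recipe parameters, and the next batch of omelets to cook is chosen by an upper-confidence-bound rule:

    # Illustrative batch Bayesian optimization over omelet parameters with comparative feedback.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    # Recipes tried so far: [whisking_time_s, salt_g, cook_temp_C] (made-up parameters).
    X_tried = np.array([[30, 1.0, 140], [60, 2.0, 160], [90, 1.5, 150], [45, 0.5, 170]], float)

    # Comparative feedback from tasters: (winner_index, loser_index) pairs.
    comparisons = [(1, 0), (2, 0), (2, 1), (2, 3), (1, 3)]

    def win_rates(n, pairs):
        """Crude scalar score per recipe: fraction of comparisons it won."""
        wins, played = np.zeros(n), np.zeros(n)
        for w, l in pairs:
            wins[w] += 1; played[w] += 1; played[l] += 1
        return np.divide(wins, np.maximum(played, 1))

    y = win_rates(len(X_tried), comparisons)

    # Fit a GP surrogate of "taste score" over standardized recipe parameters.
    mu, sigma = X_tried.mean(0), X_tried.std(0)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
    gp.fit((X_tried - mu) / sigma, y)

    # Propose the next batch of 3 recipes by upper confidence bound over random candidates.
    candidates = rng.uniform([20, 0.5, 130], [120, 3.0, 180], size=(200, 3))
    mean, std = gp.predict((candidates - mu) / sigma, return_std=True)
    next_batch = candidates[np.argsort(mean + 1.0 * std)[-3:]]
    print(np.round(next_batch, 1))   # three recipes to cook and compare next

    Choosing a whole batch at once is what keeps the number of tasting rounds, and thus the number of omelets each taster must eat, small.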

    So, how did the robot measure up as a chef? The omelets it produced at the end of the training process generally tasted "great," much better than the researchers expected!

    The results demonstrated that machine learning can be used to produce quantifiable improvements in a robot¡¯s food preparation skills. Furthermore, such an approach can be easily extended to multiple robotic chefs.

    The results of this research were reported in the journal IEEE Robotics and Automation Letters and presented at the IEEE International Conference on Robotics and Automation.

    Going forward, further studies will be conducted to explore other optimization techniques and their viability. Beko, the domestic appliance company involved in the research, is passionate about designing the kitchen of the future and believes robotics applications such as this will play a crucial part.

    Obviously, domestic and commercial robots aimed at mass markets are nice and they will generate huge revenues later in the 21st century. However, the two industries that have the resources and long-term perspective to commercialize sophisticated service robots are healthcare and defense. At this point, defense is doing the heavy lifting in terms of sensors, batteries, motors and raw computing power. On the other hand, healthcare is providing the use cases and the funding needed to develop cutting-edge training techniques for robots.

    Today, there are as many different kinds of robots used in health care settings as there are tasks for them to perform. But those robots are mostly part of "pilot programs." Already there are robotic exoskeletons that help staff lift patients safely, and there are delivery robots that zip around hospital hallways like motorized room service carts. Meanwhile, doll-like therapy robots comfort and calm patients agitated by the disorienting symptoms of dementia. And human pharmacists work alongside robotic dispensing systems when filling prescriptions.

    Increasingly, advanced training and sensing techniques are preparing robots to take over tasks that drain human caregivers. For instance, consider the role robots could play in helping people to eat. According to census data from 2010, about 1 million adults in the United States needed someone to help them eat. By 2030, that number is expected to be dramatically higher. Being dependent on a caregiver to feed them every bite, every day, takes away a person's sense of independence. So, there is a need to give people more control over their lives.

    To address this need, researchers at the University of Washington are working on an autonomous feeding system that would be attached to people's wheelchairs and feed them whatever they wanted to eat. The idea involves a robotic system that can identify different foods on a plate and use a fork to pick up and deliver the desired bites to a person's mouth.

    The UW team published its results in the journal IEEE Robotics and Automation Letters and presented them at the ACM/IEEE International Conference on Human-Robot Interaction.

    When they started the project, the researchers realized there are so many ways people can eat a piece of food, depending on its size, shape or consistency, that it's hard to know where to start. The solution was to set up an experiment to see how humans eat common foods like grapes and carrots.

    The researchers arranged plates with about a dozen different kinds of food, ranging in consistency from hard carrots to soft bananas. The plates also included foods which have a tough skin and soft insides, like tomatoes and grapes. Then the team gave volunteers a fork and asked them to pick up different pieces of food and feed them to a mannequin. The fork contained a sensor to measure how much force people used when they picked up food.

    The volunteers used various strategies to pick up food with different consistencies. For example, people skewered soft items like bananas at an angle to keep them from slipping off the fork. For items like carrots and grapes, the volunteers tended to use wiggling motions to increase the force and spear each bite.

    To design a skewering and feeding strategy that changes based on the food item, the researchers combined two different algorithms. First, they used an object-detection algorithm called RetinaNet, which scans the plate, identifies the types of food on it and places a frame around each item.

    Then they developed SPNet, an algorithm that examines the type of food in a specific frame and tells the robot the best way to pick up the food. For example, SPNet tells the robot to skewer a strawberry or a slice of banana in the middle, and spear carrots at one of the two ends.
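    In outline, the pipeline looks something like the sketch below. The detector is stubbed out as a stand-in for RetinaNet, and the per-item strategies only approximate the behavior attributed to SPNet; the labels, boxes, and angles are illustrative assumptions, not the UW code.

    # Two-stage sketch: detect food items on the plate, then pick a skewering strategy per item.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Detection:
        label: str
        box: Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max) on the plate image

    def detect_food(image) -> List[Detection]:
        """Stand-in for the RetinaNet object detector."""
        return [Detection("banana_slice", (10, 10, 40, 40)),
                Detection("carrot", (60, 15, 140, 35)),
                Detection("grape", (50, 60, 70, 80))]

    def skewer_plan(d: Detection) -> dict:
        """Stand-in for SPNet's per-item strategy: where and how to skewer."""
        x0, y0, x1, y1 = d.box
        center = ((x0 + x1) / 2, (y0 + y1) / 2)
        if d.label in ("banana_slice", "strawberry"):
            return {"target": center, "angle_deg": 45, "motion": "tilted_skewer"}
        if d.label == "carrot":
            end = (x1 - 5, (y0 + y1) / 2)         # spear near one end of the carrot
            return {"target": end, "angle_deg": 90, "motion": "vertical_skewer"}
        return {"target": center, "angle_deg": 90, "motion": "wiggle_skewer"}   # grapes, etc.

    for det in detect_food(image=None):
        print(det.label, skewer_plan(det))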

    The team had the robot pick up pieces of food and feed them to volunteers using SPNet or a more uniform strategy: an approach that skewered the center of each food item regardless of what it was. SPNet's varying strategies outperformed or performed the same as the uniform approach for all the food.

    The team is currently getting feedback from caregivers and patients in assisted living facilities on how to improve the system to match people's needs.

    Ultimately the goal is for the robot to help people have their lunch or dinner on their own. But the point is not to replace caregivers; the goal is to empower them. With a robot to help, the caregiver can set up the plate, and then do something else while the person eats.

    Another health care application requiring training is helping people get dressed. It's a big market: more than 1 million Americans require daily physical assistance to get dressed because of injury, disease and advanced age. Robots could potentially help. But dealing with cloth and the human body is complex for robots.

    To address this need, a robot at the Georgia Institute of Technology is successfully sliding hospital gowns on people¡¯s arms.

    The machine, called PR2, taught itself in one day by analyzing nearly 11,000 simulated examples of a robot putting a gown onto a human arm.

    People learn new skills using trial-and-error. So, the Georgia Tech researchers gave PR2 the same opportunity. Doing thousands of trials on a human would have been dangerous as well as impossibly tedious. But in just one day, using simulations, the robot learned what a person may physically feel while getting dressed.

    The robot also learned to predict the consequences of moving the gown in different ways. Some motions made the gown taut, pulling hard against the person's body. Other movements slid the gown smoothly along the person's arm. The robot uses these predictions to select motions that comfortably dress the arm.

    After success in simulation, the PR2 attempted to dress people. Participants sat in front of the robot and watched as it held a gown and slid it onto their arms. Rather than vision, the robot used its sense of touch to perform the task based on what it learned about forces during the simulations.

    The key is that the robot is always thinking ahead. It asks itself, "If I pull the gown this way, will it cause more or less force on the person's arm? What would happen if I go that way instead?"

    The researchers varied the robot's timing and allowed it to think as much as a fifth of a second into the future while strategizing about its next move. Less than that caused the robot to fail more often.
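    A toy version of that look-ahead loop is sketched below. The force model is a stand-in for the predictor PR2 learned from its simulated dressing trials, and the horizon, costs, and candidate motions are invented numbers, not the Georgia Tech implementation.

    # Pick the gown motion whose predicted forces over a short horizon are lowest,
    # while still making forward progress along the arm.
    import numpy as np

    HORIZON_S = 0.2          # plan roughly a fifth of a second ahead
    DT = 0.05                # control step, so about 4 predicted steps per plan

    def predicted_force(motion: np.ndarray, t: int) -> float:
        """Toy stand-in for the learned model: faster, more sideways motion -> more force."""
        speed = np.linalg.norm(motion)
        sideways = abs(motion[1])
        return 0.5 * speed * (t + 1) + 2.0 * sideways

    def score(motion: np.ndarray) -> float:
        """Lower is better: penalize predicted force, reward forward progress along the arm."""
        steps = round(HORIZON_S / DT)
        force_cost = sum(predicted_force(motion, t) for t in range(steps))
        progress = motion[0] * steps                      # forward component along the arm
        return force_cost - 5.0 * progress

    candidate_motions = [np.array([0.02, 0.00]),          # slow, straight along the arm
                         np.array([0.05, 0.00]),          # faster, straight
                         np.array([0.05, 0.03])]          # fast and pulling sideways (taut gown)

    best = min(candidate_motions, key=score)
    print("chosen motion (m per step):", best)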

    By predicting the physical implications of their actions, robots can provide assistance that is safer, more comfortable and more effective.

    Unlike industrial robots, service robots are often targeted at tasks where they have to respond to people's emotions and anticipate their reactions to situations. That's why training robots to read and emulate human emotions has become a particularly important area of development. And it's a specialized area of research that's already delivering results.

    Time Magazine profiled Stevie, a "socially assistive robot" designed to help users by engaging with them socially as well as physically. The 4-foot, 7-inch robot is equipped with autonomous navigation. While it can roll through hallways unassisted, Stevie never leaves his room without a handler. He has voice-activated controls similar to Amazon's Alexa and responds to words with speech, gestures, and head movements. For instance, if you tell the robot you're sick, it will slump forward with a sorrowful frown on its LED-screen face and say, "I'm sorry to hear that." If you pay Stevie a compliment, the screen reverts to a smile. When at rest, its head tilts gently and its digital brown eyes blink, patiently waiting for the next command.

    A "socially assistive robot" such as Stevie can be useful in assisted living and nursing homes in a number of ways. Stevie could go door-to-door taking meal orders on the touchscreen attachment that can be mounted to its body. And since the robot can recognize voice commands such as "help me," it could alert staff to a resident in distress.

    But generally speaking, the residents want the robot to stay and interact with them. They want the robot to keep them company. In short, they want a "robot friend." Therefore, while the final version of the machine will still be able to make deliveries, its primary role is likely to be more social and enjoyable. The developers have found that the enjoyable things are probably more important to "get right" in the short term, because those are the things that seem to affect people's quality of life.

    That's an important finding, but it's not something new.

    Research into social robots has shown that machines which respond to emotion can help the most vulnerable people including the elderly and children. And they could lead to robots becoming more widely socially accepted.

    Robots that help care for others are often at the cutting edge of emotional interaction. RoboKind created a robot named Milo to help children with autism spectrum disorders learn more about emotional expression and empathy while collecting data on their progress. Milo is both a robotic teacher and a student. His friendly face makes him approachable, so children can analyze his expressions without feeling social anxiety.

    Another situation where robots can reduce stress is in hospital settings. To address this problem, Expper Tech's 'Robin' robot was designed as a companion to provide emotional support for children undergoing medical treatment. Robin explains medical procedures to them, plays games and tells stories, and during treatment he distracts them to reduce their perception of pain.

    Expper's robot uses AI to create empathy, remembering facial expressions and conversations to build dialogue for follow-up sessions.

    In trials, the team found that Robin led to a 34% decrease in stress and a 26% increase in happiness among the 120 children who interacted with him at least once.

    Healthcare robots could all benefit from displaying emotional intelligence, both recognizing and responding to human emotions, and to some extent, managing them. A problem with this level of sophistication is the fear that human jobs may be lost as robots become more adept at handling social situations.

    However, population trends suggest that the demand for robots to work alongside people in care situations will grow over time. By 2050, the number of people aged 65 and over globally will be 1.6 billion, representing roughly twice the proportion of the population it does today. An extra 3.5 million care workers will be needed, and that should make room for lots of emotionally intelligent robots.

    Already, relatively simple systems are being trained to meet some of that demand. This includes ProxEmo, a little wheeled robot that can guess how you are feeling from the way you walk. Another example is ENRICHME, an 'ambient assisted living' robot from the University of Lincoln in the UK; ENRICHME helps older people to stay physically and mentally active.

    Going forward, full-spectrum emotional AI will be needed to cope with the complexity of true human interaction, and that is the goal of organizations such as Affectiva. So far, the company has trained its algorithms to detect seven emotions: anger, contempt, disgust, fear, surprise, sadness, and joy. To do this training, it used more than nine million faces from countries around the world. Affectiva expects that, in order to respond to our needs in a more understanding way, devices will contain emotion chips as standard equipment. Obviously, technology companies could potentially use these advances to manipulate our emotions, but this seems like a small risk compared to the benefits.
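    For illustration only (this is not Affectiva's model), the inference side of such a system reduces to a multi-class classifier over the seven emotion labels. Here a stand-in feature extractor and an untrained linear head show the shape of the computation:

    # Toy 7-way emotion classifier: face image -> feature vector -> softmax over emotions.
    import numpy as np

    EMOTIONS = ["anger", "contempt", "disgust", "fear", "surprise", "sadness", "joy"]
    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(7, 128)), np.zeros(7)      # stand-in weights for a trained head

    def face_features(image) -> np.ndarray:
        """Stand-in for the upstream face-embedding network."""
        return rng.normal(size=128)

    def classify(image) -> dict:
        logits = W @ face_features(image) + b
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return dict(zip(EMOTIONS, np.round(probs, 3)))

    print(classify(image=None))   # one probability per emotion label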

    To date, the impact of social robots on our lives has been tiny. And service robot manufacturers come and go without robots becoming a fixture in people's lives. But newer and more sophisticated models are being introduced that should soon produce a big breakthrough. Human emotions are difficult to define, but as trust in robots increases, cracking the psychological barrier becomes easier to imagine.

    What's the bottom line?

    Training service robots to deal with real people in real places is increasingly the focus of research around the world. And a combination of innovative thinking, sophisticated software, advanced sensors and raw computing power promises to make once-impossible capabilities highly cost-effective. Those are the hallmarks of an industry approaching a tipping point.

    Given this trend, we offer the following forecasts for your consideration.

    First, service robots will only be accepted when they can produce real results cost-effectively, in a real-world environment.

    Various versions of Roomba and their floor-cleaning competitors prove this point. Existing sensors and software make it possible for ordinary people to train them to navigate a home, avoid problems, and clean the floors. Other categories of service robots have not been able to overcome this training hurdle.

    Second, the first large-scale adoption of service robots will be in long-term care facilities. As Harvard's Clayton Christensen emphasized, disruptive technologies first penetrate markets that are not well served by existing offerings. Healthy people can quickly and cheaply do many household tasks themselves. Wealthy people with health issues can easily hire humans to perform the tasks they can't or won't do themselves. However, people in long-term care facilities typically would like more social interaction and personal service than the staff can readily give them. Whether it's being fed, dressed, or taken from place to place, a swarm of special-purpose robots is a 24/7 solution.

    Third, as we're seeing in today's health care trials, the optimal service robot will take the form of a swarm of special-purpose robots, rather than one all-purpose robot.
     
    In the home, that swarm will likely fill the gaps between existing state-of-the-art appliances. For example, dishwashers, refrigerators, microwave ovens, televisions, alarm systems, lighting systems, smart faucets, health care monitors and smart assistants like Alexa will all become more integrated, powerful and function-rich. The consumables subsystem (for food, paper goods, and personal care items) is likely to use RFID technology to replenish, prepare and dispose of items. A telemedicine system could involve wearable sensors and an AI-based monitoring system.

    Fourth, advanced general-purpose household robots like the Jetsons' Rosie will be technologically feasible by 2030, but will not be cost-effective before 2040. Infantry robots are getting close to having the dexterity, stability and durability needed to serve this purpose. But, like an F-35 fighter, they are very expensive to build and maintain. Fortunately, just like most defense systems, their core technologies will become inexpensive as they mature.

    * * 

    References List:
    1. Conference on Robot Learning (CoRL) 2020. February 15, 2021. Aniruddh G. Puranic, Jyotirmoy V. Deshmukh & Stefanos Nikolaidis. Learning from Demonstrations using Signal Temporal Logic.
    https://arxiv.org/pdf/2102.07730.pdf

    2. IEEE Robotics and Automation Letters, 2020. Kai Junge, Josie Hughes, Thomas George Thuruthel & Fumiya Iida. Improving Robotic Cooking Using Batch Bayesian Optimization.

    3. IEEE Robotics and Automation Letters, 2020. Andrew Hundt, Benjamin Killeen, Nicholas Greene, Hongtao Wu, Heeyeon Kwon, Chris Paxton & Gregory D. Hager. "Good Robot!": Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer.

    4. IEEE Robotics and Automation Letters, 2019. Tapomayukh Bhattacharjee, Gilwoo Lee, Hanjun Song & Siddhartha S. Srinivasa. Towards Robotic Feeding: Role of Haptics in Fork-Based Food Manipulation.

    5. TechXplore.com. January 11, 2018. New 'emotional' robots aim to read human feelings.

    6. Time Magazine. October 4, 2019. Corinne Purtill. Stop Me if You've Heard This One: A Robot and a Team of Irish Scientists Walk Into a Senior Living Home.

    7. Reuters.com. August 3, 2017. Mark Miller. The Future of U.S. Caregiving: High Demand, Scarce Workers.