<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Convergent Science Network &#187; human-robot interaction</title>
	<atom:link href="https://csnblog.specs-lab.com/tag/human-robot-interaction/feed/" rel="self" type="application/rss+xml" />
	<link>https://csnblog.specs-lab.com</link>
	<description>Blog on Biomimetics and Neurotechnology. Written by Michael Szollosy, Dmitry Malkov, and Michelle Wilson; edited by Anna Mura.</description>
	<lastBuildDate>Tue, 27 Sep 2022 14:58:43 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>What You Say Is What You Did</title>
		<link>https://csnblog.specs-lab.com/2014/07/07/what-you-say-is-what-you-did/</link>
		<comments>https://csnblog.specs-lab.com/2014/07/07/what-you-say-is-what-you-did/#comments</comments>
		<pubDate>Mon, 07 Jul 2014 12:03:16 +0000</pubDate>
		<dc:creator><![CDATA[Dmitry Malkov]]></dc:creator>
				<category><![CDATA[Biomimetics]]></category>
		<category><![CDATA[Cognitive Sciences]]></category>
		<category><![CDATA[Computer Science]]></category>
		<category><![CDATA[Robots and Research]]></category>
		<category><![CDATA[Robots and Society]]></category>
		<category><![CDATA[Robots and the Environment]]></category>
		<category><![CDATA[Robots, Brain, Mind and Behaviour]]></category>
		<category><![CDATA[EFAA]]></category>
		<category><![CDATA[human-robot interaction]]></category>
		<category><![CDATA[icub]]></category>
		<category><![CDATA[Italian Institute of Technology]]></category>
		<category><![CDATA[Pompeu Fabra University]]></category>
		<category><![CDATA[SPECS]]></category>
		<category><![CDATA[What You Say Is What You Did]]></category>
		<category><![CDATA[WYSIWYD]]></category>

		<guid isPermaLink="false">http://csnblog.specs-lab.com/?p=5351</guid>
		<description><![CDATA[A new European project hopes to make robots more trustworthy. Year by year, robots become better and better at negotiating increasingly complex social interactions with humans. However, much as their social intelligence has improved, these interactions still suffer &#8230; <a href="https://csnblog.specs-lab.com/2014/07/07/what-you-say-is-what-you-did/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<h2><strong>A new European project hopes to make robots more trustworthy</strong></h2>
<p><a href="http://csnblog.specs-lab.com/wp-content/uploads/2014/07/Home_Slide3.jpg" rel="attachment wp-att-5357"><img class="aligncenter size-full wp-image-5357" src="http://csnblog.specs-lab.com/wp-content/uploads/2014/07/Home_Slide3.jpg" alt="Home_Slide3" width="1000" height="500" /></a></p>
<p>Year by year, robots become better and better at negotiating increasingly complex social interactions with humans. However, much as their social intelligence has improved, these interactions still suffer from a lack of transparency. In other words, unlike humans, robots cannot understand or explain their actions in intentional terms, which keeps them from communicating with humans effectively. To the joy of robots and humans alike, this challenge is now being addressed by the <a href="http://wysiwyd.upf.edu/">What You Say Is What You Did (WYSIWYD) project</a>, launched earlier this year.</p>
<p><span id="more-5351"></span></p>
<p>The project, coordinated by the <a href="http://specs.upf.edu/">SPECS lab</a> at <a href="http://www.upf.edu/en/">Pompeu Fabra University</a> in Barcelona, will develop an autobiographical memory that stores the data streams the robot acquires as a consistent personal narrative of its interaction history. The researchers also intend to devise a mechanism for converting this memory data into meaningful linguistic structures, which can then be expressed in speech and communicative actions through a dedicated channel dubbed WYSIWYD Robotese, improving mutual understanding between robots and humans.</p>
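<p>The project has not yet published implementation details, so purely as an illustration of the idea, here is a minimal Python sketch of an autobiographical memory that logs interaction events and retells them as a first-person narrative. All names and fields are hypothetical:</p>
<pre><code>from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    """One timestamped entry in the robot's interaction history."""
    timestamp: datetime
    actor: str   # who acted, e.g. "robot" or a human partner's name
    action: str  # what was done, e.g. "grasped"
    target: str  # object of the action

@dataclass
class AutobiographicalMemory:
    """Stores events and retells them as a simple personal narrative."""
    events: list[Event] = field(default_factory=list)

    def record(self, actor: str, action: str, target: str) -> None:
        self.events.append(Event(datetime.now(), actor, action, target))

    def narrate(self) -> str:
        """Convert the stored events into plain-language sentences."""
        lines = []
        for e in sorted(self.events, key=lambda ev: ev.timestamp):
            subject = "I" if e.actor == "robot" else e.actor
            lines.append(f"At {e.timestamp:%H:%M:%S}, {subject} {e.action} the {e.target}.")
        return " ".join(lines)

memory = AutobiographicalMemory()
memory.record("robot", "grasped", "red cube")
memory.record("Anna", "pointed at", "blue ball")
print(memory.narrate())
</code></pre>
<p>A real system would store multimodal data streams and generate far richer language; the point here is only the pairing of an event log with a narration step.</p>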
<p>WYSIWYD is an interdisciplinary effort that draws on robotics, cognitive science, psychology and computational neuroscience. The project builds largely on the success of the earlier <a href="http://efaa.upf.edu/">EFAA project</a>, also coordinated by SPECS. WYSIWYD is scheduled to run for three years and will hopefully bring about a qualitative change in human-robot interaction and cooperation, as well as unlock new application areas in robotics.</p>
<p>The main research platform for the project is everybody’s favourite <a href="http://www.icub.org/">iCub</a> robot, developed by the <a href="http://www.iit.it/">Italian Institute of Technology</a> in Genoa, which is also one of the institutions participating in the collaboration. iCub will be used in combination with another amazing piece of technology, the <a href="http://www.reactable.com/products/live/">Reactable</a>, an interactive table interface.</p>
<p>iCub recently celebrated its 10<sup>th</sup> anniversary. Watch the video below to see how the robot and its capabilities have evolved over the decade.</p>
<div style="width: 584px; max-width: 100%;" class="wp-video"><video class="wp-video-shortcode" id="video-5351-2" width="584" height="329" preload="metadata" controls="controls"><source type="video/mp4" src="http://www.iit.it/images/images/icub-facility/videos/icub_bday_noaudio.mp4?_=2" /><a href="http://www.iit.it/images/images/icub-facility/videos/icub_bday_noaudio.mp4">http://www.iit.it/images/images/icub-facility/videos/icub_bday_noaudio.mp4</a></video></div>
]]></content:encoded>
			<wfw:commentRss>https://csnblog.specs-lab.com/2014/07/07/what-you-say-is-what-you-did/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="http://www.iit.it/images/images/icub-facility/videos/icub_bday_noaudio.mp4" length="62801887" type="video/mp4" />
		</item>
		<item>
		<title>What robotics learned from Pixar</title>
		<link>https://csnblog.specs-lab.com/2014/03/24/what-robotics-learned-from-pixar/</link>
		<comments>https://csnblog.specs-lab.com/2014/03/24/what-robotics-learned-from-pixar/#comments</comments>
		<pubDate>Mon, 24 Mar 2014 16:08:15 +0000</pubDate>
		<dc:creator><![CDATA[Dmitry Malkov]]></dc:creator>
				<category><![CDATA[Robots and Research]]></category>
		<category><![CDATA[Robots and Society]]></category>
		<category><![CDATA[Robots, Brain, Mind and Behaviour]]></category>
		<category><![CDATA[AUR]]></category>
		<category><![CDATA[Guy Hoffman]]></category>
		<category><![CDATA[human-robot interaction]]></category>
		<category><![CDATA[Pixar]]></category>
		<category><![CDATA[Robots and emotions]]></category>
		<category><![CDATA[Shimon]]></category>
		<category><![CDATA[Travis]]></category>

		<guid isPermaLink="false">http://csnblog.specs-lab.com/?p=5095</guid>
		<description><![CDATA[Each year brings us closer to the day when robotic companions will become an integral part of our homes, schools, hospitals and offices. However, for robots to be truly accepted in our personal space, their social interactions with us must &#8230; <a href="https://csnblog.specs-lab.com/2014/03/24/what-robotics-learned-from-pixar/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><iframe width="584" height="329" src="http://www.youtube.com/embed/-dT6meyruxQ?feature=oembed" frameborder="0" allowfullscreen></iframe></p>
<p>Each year brings us closer to the day when robotic companions will become an integral part of our homes, schools, hospitals and offices. However, for robots to be truly accepted in our personal space, their social interactions with us must acquire the kind of fluency and coordination that humans expect from each other. This is one of the challenges addressed by <a href="http://guyhoffman.com/">Guy Hoffman</a>, co-director of the <a href="http://milab.idc.ac.il/">Media Innovation Lab</a> at <a href="http://portal.idc.ac.il/en/main/homepage/pages/homepage.aspx">IDC Herzliya</a> in Israel and possibly one of the most original thinkers in robotics today.</p>
<p><span id="more-5095"></span></p>
<p>Collaborative fluency implies a coordinated, synchronised meshing of joint activities between several participants, and among the factors that most affect it are the anticipation and timing of robotic movements. The problem is that most existing robots are designed to first analyse a human’s movements, calculate the appropriate response and only then act; each step delays the robot’s movements and contributes to their jerkiness and unnaturalness.</p>
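<p>Hoffman’s own controllers are considerably more sophisticated, but the contrast can be caricatured in a few lines of Python (all names here are made up for illustration): a reactive loop pays the full sensing delay before every move, while an anticipatory loop starts moving toward a prediction and corrects itself afterwards.</p>
<pre><code>import random

def sense_partner() -> str:
    """Stand-in for perception: the partner's next observed action."""
    return random.choice(["reach_left", "reach_right", "reach_left"])

def reactive_robot(n_steps: int) -> None:
    """Sense, then plan, then act: motion starts only after analysis."""
    for _ in range(n_steps):
        observed = sense_partner()            # pay the full sensing delay
        print(f"reactive: saw {observed}, responding now")

def anticipatory_robot(n_steps: int) -> None:
    """Start moving toward a predicted action, then confirm and correct."""
    history: list[str] = []
    for _ in range(n_steps):
        # crude anticipator: predict the most frequent past action
        guess = max(set(history), key=history.count) if history else "reach_left"
        observed = sense_partner()            # sensing overlaps the motion
        history.append(observed)
        outcome = "correct" if observed == guess else "correcting course"
        print(f"anticipatory: pre-moving for {guess}, {outcome}")

reactive_robot(3)
anticipatory_robot(5)
</code></pre>
<p>The anticipatory loop trades occasional wrong guesses for motion that starts earlier, which is exactly the trade-off discussed below.</p>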
<p>Guy Hoffman was among the researchers who realised that eliciting an emotional response has more to do with how a robot moves than with how it looks. Hoffman was initially inspired by <a href="http://www.pixar.com/">Pixar’s</a> animated <a href="http://en.wikipedia.org/wiki/Luxo_Jr.">short film</a> featuring a pair of desk lamps who, despite their non-anthropomorphic appearance, managed to provoke a strong emotional response purely through timing and sound effects.</p>
<p>His subsequent experience with computer animation, combined with his enthusiasm for robotics, led him to <a href="http://web.mit.edu/">MIT</a>, where he created <a href="http://alumni.media.mit.edu/~guy/aur/">AUR</a>, a real-world robotic counterpart of Pixar’s lamp capable of quietly assisting a human by anticipating their movements rather than producing a straightforward calculated response. Thanks to AUR’s smooth and obedient behaviour, people who interacted with the lamp had a more positive and fulfilling emotional experience.</p>
<div id="attachment_5098" style="width: 305px" class="wp-caption alignright"><a href="http://csnblog.specs-lab.com/wp-content/uploads/2014/03/Shimon.jpg" rel="attachment wp-att-5098"><img class="wp-image-5098     " alt="Shimon robot " src="http://csnblog.specs-lab.com/wp-content/uploads/2014/03/Shimon.jpg" width="295" height="168" /></a><p class="wp-caption-text">Shimon can improvise music together with human muscicians</p></div>
<p>According to Hoffman, robotic intelligence can essentially be classified as either a traditional “calculated” intelligence that works in a chess-like manner or a more intuitive “adventurous” intelligence that tries to anticipate its partner’s movements. Anticipating the full range of movement is tricky, however, and Hoffman’s robots still make more mistakes along the way. Even so, studies demonstrate that people prefer these less perfect robots to their more accurate but less understanding twins.</p>
<div id="attachment_5099" style="width: 239px" class="wp-caption alignleft"><a href="http://csnblog.specs-lab.com/wp-content/uploads/2014/03/31-Travis.jpg" rel="attachment wp-att-5099"><img class=" wp-image-5099    " alt="Travis, a robotic speaker dock released in 2012" src="http://csnblog.specs-lab.com/wp-content/uploads/2014/03/31-Travis.jpg" width="229" height="143" /></a><p class="wp-caption-text">Travis, a robotic speaker dock released in 2012</p></div>
<p>With one of his latest robotic creations, <a href="http://www.gtcmt.gatech.edu/research-projects/shimon">Shimon</a>, Hoffman ventured into the world of musical improvisation, applying the same principles of fluent collaboration. Why musical improvisation? Because it is a time-critical interaction that Hoffman saw as an ideal testing ground for his ideas. Shimon is essentially a robotic <a href="http://en.wikipedia.org/wiki/Marimba">marimba</a> virtuoso that can jam with human musicians in real time.</p>
<p>You can also check out <a href="http://www.gtcmt.gatech.edu/research-projects/travis">Travis</a> (also known as Shimi), a cute speaker dock released by Hoffman in 2012, which not only plays music but also enjoys it himself.</p>
]]></content:encoded>
			<wfw:commentRss>https://csnblog.specs-lab.com/2014/03/24/what-robotics-learned-from-pixar/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>The Last Moment Robot</title>
		<link>https://csnblog.specs-lab.com/2012/07/09/4147/</link>
		<comments>https://csnblog.specs-lab.com/2012/07/09/4147/#comments</comments>
		<pubDate>Mon, 09 Jul 2012 08:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Michelle Wilson]]></dc:creator>
				<category><![CDATA[Robots and Society]]></category>
		<category><![CDATA[Brown University Science Center]]></category>
		<category><![CDATA[Dan Chen]]></category>
		<category><![CDATA[human-robot interaction]]></category>
		<category><![CDATA[Last moment robot]]></category>
		<category><![CDATA[Rhode Island School of Design]]></category>
		<category><![CDATA[Robot Companions]]></category>

		<guid isPermaLink="false">http://www.robotcompanions.eu/blog/?p=4147</guid>
		<description><![CDATA[It&#8217;s OK if this gives you the creeps. If you think this kind of robot may be taking things a step too far, its creator Dan Chen would be pleased he&#8217;s gotten his point across. For starters, this robot isn&#8217;t &#8230; <a href="https://csnblog.specs-lab.com/2012/07/09/4147/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><strong>It&#8217;s OK if this gives you the creeps</strong><br />
<iframe src="http://www.youtube.com/embed/T8PNzA2S6EY" frameborder="0" width="560" height="349"></iframe></p>
<p>If you think this kind of robot may be taking things a step too far, its creator <a title="Dan Chen" href="http://www.pixedge.com/lastmoment" target="_blank">Dan Chen</a> would be pleased he&#8217;s gotten his point across. For starters, this robot isn&#8217;t actually being used for the application shown in the video above. In fact, the bed and fluorescent-lit room are nothing more than props used to create a hospital-like environment within this interactive installation.<br />
<span id="more-4147"></span><br />
Accompanied by someone dressed as a doctor, viewers of the installation enter the room one at a time, each taking a turn to lie in the hospital bed. At this point, the pseudo-doctor asks for their permission to place their arm under the Last Moment Robot&#8217;s mechanical caress. The &#8220;doctor&#8221; then leaves the room, and the robot begins to gently stroke the &#8220;patient&#8217;s&#8221; arm as the LED screen reads &#8220;end of life detected&#8221; and a soothing script of comforting words ensues.</p>
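<p>Chen has not released the installation&#8217;s code; the sequence described above is simple enough, though, that a purely hypothetical Python caricature can make its structure explicit. The display text and spoken lines below are placeholders, not Chen&#8217;s actual script:</p>
<pre><code>import time

# Hypothetical reconstruction of the installation's scripted sequence;
# the LED text and spoken lines are placeholders, not Chen's own script.
COMFORT_SCRIPT = [
    "You are not alone.",
    "I am here with you.",
]

def run_last_moment_sequence() -> None:
    print("LED: end of life detected")      # message shown on the screen
    for _ in range(3):
        print("actuator: stroking the patient's arm")
        time.sleep(1)                       # slow, gentle caress
    for line in COMFORT_SCRIPT:
        print(f"speaker: {line}")
        time.sleep(1)

run_last_moment_sequence()
</code></pre>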
<p>While the use of robotic pets in hospital care has been shown to help some people cope with stress and isolation, Chen cautions against trying to fool people into a false experience: &#8220;With my own robots, I use generic patterns of behavior to suggest at our desire for comfort and highlight the human need for intimacy. The design of my robots is honest with its function. Using no fancy adornments, I do not attempt to disguise the robots or portray them as anything but what they are.&#8221;</p>
<p>Chen further states that he thinks his devices could &#8220;serve as a stepping stone or learning tool to create deeper and more meaningful human to human relationships and build a stronger and more supportive community. Because my robots look more like appliances, the user must jump a mental gap in order to feel intimacy with the device. In the process of making this jump, I want the user to realize that the possibility of a real, deep relationship is not fully reproducible through imagination or even robotics. These are only temporary solutions.&#8221;</p>
<p>Nevertheless, Chen is a robot-lover, and while the video above may make some of us feel uncomfortable, he maintains that the idea is not meant to be negative. To explain his point, he includes an excerpt from Anthony Dunne and Fiona Raby&#8217;s <em>Design Noir: The Secret Life of Electronic Objects</em> in his recently published Masters thesis: “The idea is not to be negative, but to stimulate discussion and debate amongst designers, industry and the public about electronic technology and everyday life. This is done by developing alternative and often gently provocative artifacts which set out to engage people through humor, insight, surprise and wonder.”</p>
<p>The interactive installation has been running at the <a title="Brown University Science Center" href="http://brown.edu/academics/science-center/" target="_blank">Brown University Science Center</a> as well as the <a title="RISD" href="http://www.risd.edu/" target="_blank">Rhode Island School of Design</a>. For more information on some of Chen&#8217;s fascinating work, check out the thesis he wrote for his Master of Fine Arts in Digital + Media, titled <a title="Dan Chen_thesis" href="http://www.pixedge.com/download/dan_thesis.pdf" target="_blank">File &gt; Save As &gt; Intimacy</a>, which examines the question: what is intimacy without humanity?</p>
]]></content:encoded>
			<wfw:commentRss>https://csnblog.specs-lab.com/2012/07/09/4147/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Meet the Mask-bot</title>
		<link>https://csnblog.specs-lab.com/2011/12/12/meet-the-mask-bot-not-finished/</link>
		<comments>https://csnblog.specs-lab.com/2011/12/12/meet-the-mask-bot-not-finished/#comments</comments>
		<pubDate>Mon, 12 Dec 2011 14:21:04 +0000</pubDate>
		<dc:creator><![CDATA[Michelle Wilson]]></dc:creator>
				<category><![CDATA[Asia]]></category>
		<category><![CDATA[Ethics]]></category>
		<category><![CDATA[Europe]]></category>
		<category><![CDATA[Robots and Research]]></category>
		<category><![CDATA[Robots and Society]]></category>
		<category><![CDATA[Robots Around the World]]></category>
		<category><![CDATA[Dr. Takaaki Kuratate]]></category>
		<category><![CDATA[Gordon Cheng]]></category>
		<category><![CDATA[human-robot interaction]]></category>
		<category><![CDATA[Institute of Cognitive Systems]]></category>
		<category><![CDATA[Mask-bot]]></category>
		<category><![CDATA[Robot Companions for Citizens]]></category>
		<category><![CDATA[Talking heads]]></category>
		<category><![CDATA[Technische University]]></category>

		<guid isPermaLink="false">http://www.robotcompanions.eu/blog/?p=2668</guid>
		<description><![CDATA[it&#8217;s more than just a pretty face&#8230; At first glance it&#8217;s a generic plastic mask fixed in front of a projector. Switch it on and you’re looking at the most realistic &#8220;talking head&#8221; yet. Researchers from the Institute of Cognitive &#8230; <a href="https://csnblog.specs-lab.com/2011/12/12/meet-the-mask-bot-not-finished/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><strong>it&#8217;s more than just a pretty face&#8230;</strong><br />
<iframe src="http://www.youtube.com/embed/oFp1hpH25oI" frameborder="0" width="560" height="349"></iframe></p>
<p>At first glance it&#8217;s a generic plastic mask fixed in front of a projector. Switch it on and you’re looking at the most realistic &#8220;talking head&#8221; yet. Researchers from the <a title="ICS" href="http://www.ics.ei.tum.de/" target="_blank">Institute of Cognitive Systems (ICS) at the Technische Universität München</a> have collaborated with the <a title="AIST" href="http://www.aist.go.jp/aist_e/about_aist/index.html" target="_blank">National Institute of Advanced Industrial Science and Technology (AIST) in Japan</a> to create a life-sized talking head, the Mask-bot.<br />
<span id="more-2668"></span></p>
<p>While talking heads already exist, this one is the first to project a 3D image of a human face onto a plastic mask, making it appear more realistic than previous 3D heads, which display cartoon-like faces. However, Mask-bot is more than just a pretty face; it’s a rather articulate artifact: when one of its creators, <a title="Takaaki Kuratate" href="http://www.ics.ei.tum.de/index.php?id=9" target="_blank">Dr. Takaaki Kuratate</a>, says the word &#8220;rainbow&#8221;, Mask-bot responds with a concrete explanation of the phenomenon: &#8220;When the sunlight strikes raindrops in the air, they act like a prism and form a rainbow.&#8221; Via computer control, Mask-bot is capable of speech and highly realistic facial movements.</p>
<p>Mask-bot can readily be applied in a variety of settings, including video conferences. It is also a powerful text-to-speech converter, able to reproduce content typed via keyboard in English and Japanese, and soon in German. Above and beyond these applications, its creators hope that Mask-bot will serve as a platform for further investigating human-robot interaction.</p>
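<p>Mask-bot&#8217;s speech pipeline is custom, but readers curious about the text-to-speech side of the idea can experiment with off-the-shelf tools. A minimal sketch using the open-source pyttsx3 Python library (chosen here purely for illustration; it is not something the Mask-bot team uses) could look like this:</p>
<pre><code>import pyttsx3  # open-source offline TTS engine, not part of Mask-bot

def speak_typed_text() -> None:
    """Read keyboard input aloud, one line at a time."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # speaking speed in words per minute
    while True:
        text = input("type something (empty line quits): ")
        if not text:
            break
        engine.say(text)
        engine.runAndWait()  # block until the utterance finishes

if __name__ == "__main__":
    speak_typed_text()
</code></pre>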
<p><a title="Gordon Cheng" href="http://www.ics.ei.tum.de/index.php?id=9" target="_blank">Profesor Gordon Cheng</a>, chair of ICS at the Technische University, is a robotocist that is highly involved in this type of research. He is also a current consortium member of <a title="Robot Companions for Citizens" href="http://www.robotcompanions.eu/" target="_blank">Robot Companions for Citizens</a>, a <a title="FET flagships" href="http://cordis.europa.eu/fp7/ict/programme/fet/flagship/" target="_blank">European initiative</a> that aims to develop and safely deploy robots to assist us in our daily living. Due to the fact that robots are becoming increasingly present in societies around the world, robotocists who are conscientious about their work believe it&#8217;s critical to assess all aspects of human-robot interaction.</p>
]]></content:encoded>
			<wfw:commentRss>https://csnblog.specs-lab.com/2011/12/12/meet-the-mask-bot-not-finished/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
