{"id":1622,"date":"2009-05-06T14:43:57","date_gmt":"2009-05-06T10:43:57","guid":{"rendered":"https:\/\/itp.nyu.edu\/opportunities\/2009\/05\/06\/job-paris-3d\/"},"modified":"2009-05-06T14:43:57","modified_gmt":"2009-05-06T10:43:57","slug":"job-paris-3d","status":"publish","type":"post","link":"https:\/\/itp.nyu.edu\/opportunities\/2009\/05\/06\/job-paris-3d\/","title":{"rendered":"Job: Paris: 3D"},"content":{"rendered":"<div>\n<div>\n<div>\n<p><b><span>Position Opening<\/span><\/b><\/p>\n<p>&nbsp;<\/span><\/p>\n<p><b><span>Subject:<\/span><\/b><span> 3D human animation model<\/span><\/p>\n<p><span>&nbsp;<\/span><\/p>\n<p><span>We are look=<br \/>\ning for a candidate<br \/>\nwith experience in 3D computer graphics. <\/span><\/p>\n<p><span>&nbsp;<\/span><\/p>\n<p><b><span>Context:<\/span><\/b><\/p>\n<p><span>The project<br \/>\ntakes place within the ECA system Greta. This ECA system accepts as input a<br \/>\ntext to be said by the agent. The text has been enriched with information o=<br \/>\nn<br \/>\nthe manner the text ought to be said (i.e. with which communicative acts it<br \/>\nshould be said). The behavioral engine computes the synchronized verbal and<br \/>\nnonverbal behaviors of the agent. The animation module follows the MPEG-4 a=<br \/>\nnd<br \/>\nthe H-Anim standards. <\/span><span>This w=<br \/>\nork is<br \/>\npart of the EU project CALLAS (<\/span><span><span>http:\/\/www.callas-newmedia.eu\/<\/span><\/span><span>). CALLAS aims to provide a new paradigm for investigating a more<br \/>\ncomprehensive set of emotions in multimodal interfaces tailored to New Medi=<br \/>\na<br \/>\nenvironments, changing the way we perceive contemporary and future media<br \/>\napplications.<\/span><\/p>\n<p><span>&nbsp;<\/span><\/p>\n<p><span>Work to<br \/>\nbe done:<\/span><\/b><\/p>\n<p><span>The<br \/>\nanimation module designed so far has implemented arm movements only. It nee=<br \/>\nds<br \/>\nto be extended to the full upper body, in particular to the torso and shoul=<br \/>\nder.<br \/>\nThe body animation needs also to produce more fluid animations.<\/p>\n<p><span>&nbsp;<\/span><\/p>\n<p><span lang=\"3D=\">The<br \/>\nanimation concerns communicative movement. Such a movement is decomposed in<br \/>\nvarious phases (preparation of the movement, stroke, hold, retraction). As =<br \/>\nbody<br \/>\nmovements and speech are synchronized with each other, the animation module<br \/>\nneeds to be flexible and allows for adaptation of the movement. =<br \/>\n<\/span><\/p>\n<p><span>&nbsp;<\/span><\/p>\n<p><span>The<br \/>\nanimation needs to work in real-time.<\/span><\/p>\n<p><span>The<br \/>\nanimation will be driven through a language command of the type<br \/>\n=91move-shoulder-up=92 or =91bend-torso-forward=92. 
This language follows t=<br \/>\nhe BML<br \/>\nspecification: <\/span><\/p>\n<p><span><span>http:\/\/wiki.mindmakers.org\/proj=<br \/>\nects:bml:draft1.0<\/span><\/span><\/p>\n<p><span>&nbsp;<\/span><\/p>\n<p><b><span>=<br \/>\nPre-requisite<\/span><\/b><span>:<br \/>\nC++, 3D computer graphics<\/span><\/p>\n<p><b><span lang=\"3D=\">Project<br \/>\nLength<\/span><\/b><span>: 1=<br \/>\n2 months<\/span><\/p>\n<p><b><span>Place<\/span><\/b><span>: TELECOM ParisTech<\/span><\/p>\n<p><b><span>=<br \/>\nStipend<\/span><\/b><span>: a=<br \/>\nround 2000 euros depending on<br \/>\napplicant=92s qualification<\/span><\/p>\n<p><b><span>Contact<\/span><\/b><span style=\"3D=\">: <\/span><\/p>\n<p><span>Catherine Pelachaud<\/span><\/p>\n<p><span>catherine.pelachaud@te=<br \/>\nlecom-paristech.fr<\/span><\/p>\n<p><span><span>http:=<br \/>\n\/\/www.tsi.enst.fr\/~pelachau<\/span><\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><BR><br \/>\n&#8212;<br \/>\n2be296b3d9de53741@lists.nyu.edu<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Position Opening &nbsp; Subject: 3D human animation model &nbsp; We are look= ing for a candidate with experience in 3D computer graphics. &nbsp; Context: The&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3,8],"tags":[],"class_list":["post-1622","post","type-post","status-publish","format-standard","hentry","category-job","category-listserv","entry"],"_links":{"self":[{"href":"https:\/\/itp.nyu.edu\/opportunities\/wp-json\/wp\/v2\/posts\/1622","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/itp.nyu.edu\/opportunities\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/itp.nyu.edu\/opportunities\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/itp.nyu.edu\/opportunities\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/itp.nyu.edu\/opportunities\/wp-json\/wp\/v2\/comments?post=1622"}],"version-history":[{"count":0,"href":"https:\/\/itp.nyu.edu\/opportunities\/wp-json\/wp\/v2\/posts\/1622\/revisions"}],"wp:attachment":[{"href":"https:\/\/itp.nyu.edu\/opportunities\/wp-json\/wp\/v2\/media?parent=1622"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/itp.nyu.edu\/opportunities\/wp-json\/wp\/v2\/categories?post=1622"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/itp.nyu.edu\/opportunities\/wp-json\/wp\/v2\/tags?post=1622"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}