For graduate-level, state-of-the-art research and lists of publications, refer to the individual faculty websites.
The purpose of this gallery is to show prospective undergraduate students
- some interesting past Bachelor's theses done by students under the supervision of IMS faculty,
- work done in past and current EU- and DFG-funded projects, to which undergraduate students also contributed in one way or another, and
- results of competitions in which teams from Jacobs IMS research groups participated.
EU-FP7 Project RobLog: Final Industrial Demonstration at Vollers (March 2015)
This sack-unloading robot is called “Empticon”. It recognizes and localizes piled sacks in the scene using the perception software of the Jacobs Robotics team. The 3D location of a recognized sack is then used to plan and execute motions of the robot links to unload the sack. The Empticon hardware was designed by the Danish company QUBIQA and the institute BIBA at the University of Bremen.
This work was done as part of the EU-funded project RobLog.
Perception in Cluttered Environments: The RobLog Advanced Scenario (March 2015)
This robot, located at the institute BIBA at the University of Bremen, recognizes and localizes 3D objects in the scene using the perception software of the Jacobs Robotics team. The 3D location of a recognized object is then used to plan and execute motions of the robot links to unload the object. This work was done as part of the EU-funded project RobLog.
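A central step in both RobLog demonstrations is turning a recognized object into a 6-DoF pose that the motion planner can act on. The Python sketch below illustrates one generic way to do this with the open-source Open3D library: a known object model is aligned to the captured scene cloud with ICP starting from a rough initial guess, and the estimated transform is then used to express a predefined approach point in the scene frame. The file names, the initial guess, and the approach point are placeholder values for illustration only; this is not the actual RobLog perception software.

```python
import numpy as np
import open3d as o3d

# Placeholder inputs: a model point cloud of the object and a captured scene cloud.
model = o3d.io.read_point_cloud("model_object.pcd")
scene = o3d.io.read_point_cloud("scene.pcd")

# Rough initial guess for the model pose in the scene (e.g. from a coarse
# recognition stage); the identity is used here purely for illustration.
T_init = np.eye(4)

# Refine the pose with point-to-point ICP (2 cm correspondence threshold).
result = o3d.pipelines.registration.registration_icp(
    model, scene, 0.02, T_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
T_model_in_scene = result.transformation    # 4x4 homogeneous transform

# A predefined approach point given in the model frame (placeholder value),
# expressed in the scene frame so that a motion planner can be asked to reach it.
p_model = np.array([0.0, 0.0, 0.10, 1.0])
p_scene = T_model_in_scene @ p_model
print("approach point in scene frame:", p_scene[:3])
```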
Jacobs Team Wins Second Place in the International Conference on Robotics & Automation 2011 “Solutions in Perception Challenge”
The team consisted of Narunas Vaskevicius, Alexandru Ichim (then an undergraduate student), Prof. Kaustubh Pathak, and Prof. Andreas Birk. Berkeley won the first prize and Stanford the third.
Bachelor's Thesis: Teaching a Recurrent Neural Network to Accompany Music
Work done by Tomas Pllaha (class of 2014) in the Machine Learning group of Prof. Herbert Jaeger. Unmute your speakers.
Human Motion Patterns Learnt with Conceptor-Controlled Neural Network
A recurrent neural network learnt to re-generate a variety of motion patterns. The patterns were each defined by joint-angle trajectories with 61 degrees of freedom in total. For generating the video, the trained neural pattern generator was controlled “top-down” by activating and morphing a sequence of conceptors, each of which represented one of the learnt “prototype” patterns (the colored bars at the top of the video show the activation of the conceptors). Credits: the training patterns were distilled from human motion-capture sequences obtained from the CMU mocap repository (http://mocap.cs.cmu.edu); mocap data processing and visualization were done using the mocap toolbox from the University of Jyväskylä (https://www.jyu.fi/hum/laitokset/musi…). See Prof. Herbert Jaeger’s Conceptors website for more information.
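The NumPy sketch below illustrates the basic mechanics behind this demo under strongly simplified assumptions: a small random reservoir and two one-dimensional sine patterns stand in for the 61-DoF mocap data. It drives the reservoir with each pattern, "loads" the patterns into the recurrent weights by ridge regression, computes a conceptor C = R(R + α⁻²I)⁻¹ from the collected states, and then re-generates or morphs patterns by inserting a (blended) conceptor into the state update. All sizes and hyperparameters are illustrative choices; this is a toy illustration of the idea, not the setup used for the video.

```python
import numpy as np

rng = np.random.default_rng(0)
N, washout, T = 100, 100, 1000          # reservoir size, washout, training length
alpha, ridge = 10.0, 1e-4               # conceptor aperture, ridge parameter

# Random reservoir, input weights, and bias (illustrative scalings).
W_raw = rng.normal(0, 1, (N, N))
W = W_raw * (1.4 / np.max(np.abs(np.linalg.eigvals(W_raw))))   # spectral radius ~1.4
W_in = 1.2 * rng.normal(0, 1, (N, 1))
b = 0.2 * rng.normal(0, 1, (N, 1))

def drive(pattern):
    """Drive the reservoir with a 1-D pattern; collect states after washout."""
    x = np.zeros((N, 1))
    X, Xold, P = [], [], []
    for n in range(washout + T):
        x_old = x
        x = np.tanh(W @ x + W_in * pattern[n] + b)
        if n >= washout:
            Xold.append(x_old); X.append(x); P.append(pattern[n])
    return np.hstack(X), np.hstack(Xold), np.array(P).reshape(1, -1)

# Two simple "prototype" patterns (stand-ins for the mocap joint trajectories).
t = np.arange(washout + T)
patterns = [np.sin(2 * np.pi * t / 9.8), np.sin(2 * np.pi * t / 14.3)]

driven = [drive(p) for p in patterns]
X_all = np.hstack([d[0] for d in driven])
Xold_all = np.hstack([d[1] for d in driven])
P_all = np.hstack([d[2] for d in driven])

# "Load" the patterns: find W_loaded so that W_loaded x(n-1) ~ W x(n-1) + W_in p(n).
targets = W @ Xold_all + W_in @ P_all
W_loaded = targets @ Xold_all.T @ np.linalg.inv(Xold_all @ Xold_all.T + ridge * np.eye(N))

# Linear readout mapping reservoir states back to the pattern value.
W_out = P_all @ X_all.T @ np.linalg.inv(X_all @ X_all.T + ridge * np.eye(N))

def conceptor(X):
    """C = R (R + alpha^-2 I)^-1, with R the state correlation matrix."""
    R = X @ X.T / X.shape[1]
    return R @ np.linalg.inv(R + alpha ** -2 * np.eye(N))

C = [conceptor(d[0]) for d in driven]

def generate(Cmix, steps=200):
    """Run the loaded reservoir autonomously under the control of a conceptor."""
    x = rng.normal(0, 1, (N, 1))
    out = []
    for _ in range(steps):
        x = Cmix @ np.tanh(W_loaded @ x + b)
        out.append((W_out @ x).item())
    return out

y0 = generate(C[0])                      # re-generate pattern 0
y1 = generate(C[1])                      # re-generate pattern 1
ymix = generate(0.5 * C[0] + 0.5 * C[1])   # morph between the two patterns
```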
3D Mapping
A virtual fly-through of a 3D map of Bremen's historic city center, created by registering point-cloud data from a 3D laser scanner.
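Registration here means estimating the rigid transform that best aligns one laser scan with the next, so that all scans can be fused into a single map. The NumPy/SciPy sketch below shows the textbook point-to-point ICP loop (nearest-neighbour matching followed by an SVD-based rigid fit) and how pairwise transforms are chained to accumulate scans into one map. The scan data is a random placeholder; the actual city-center map was of course built with a far more robust pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, iterations=30):
    """Align 'source' (Nx3) to 'target' (Mx3) with point-to-point ICP."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                  # nearest-neighbour matches
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Chain pairwise registrations to accumulate scans into one map
# (placeholder random "scans" stand in for the laser-scanner data).
rng = np.random.default_rng(1)
scans = [rng.uniform(-1, 1, (500, 3)) for _ in range(3)]
map_points = [scans[0]]
R_acc, t_acc = np.eye(3), np.zeros(3)
for prev, curr in zip(scans, scans[1:]):
    R, t = icp(curr, prev)                        # pose of current scan in previous frame
    R_acc, t_acc = R_acc @ R, R_acc @ t + t_acc
    map_points.append(curr @ R_acc.T + t_acc)
print("map size:", sum(len(p) for p in map_points), "points")
```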
Segmentation in RGB-D Images
RGB-D (color + 3D information) images were collected from an ASUS Xtion Pro sensor mounted on a robot end-effector as it slowly moved closer to the scene. The images were segmented automatically into homogeneous patches that could represent objects of interest in the scene. The patches were then sent to the perception pipeline for further merging and interpretation.
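One simple way to obtain such homogeneous patches is to convert an RGB-D frame into a point cloud, repeatedly extract the dominant planar segment with RANSAC, and cluster whatever remains into smaller blobs. The Open3D-based sketch below illustrates that idea as a generic stand-in; it is not the segmentation method of the Jacobs perception pipeline, and the input file name and all thresholds are placeholder values.

```python
import numpy as np
import open3d as o3d

# Placeholder input: a point cloud converted from one RGB-D frame.
pcd = o3d.io.read_point_cloud("rgbd_frame.pcd")
pcd = pcd.voxel_down_sample(voxel_size=0.005)    # 5 mm grid to speed things up

patches = []
remaining = pcd
for _ in range(5):                               # extract up to 5 planar patches
    if len(remaining.points) < 500:
        break
    plane, inliers = remaining.segment_plane(
        distance_threshold=0.01,                 # 1 cm tolerance to the plane
        ransac_n=3,
        num_iterations=1000)
    patches.append(remaining.select_by_index(inliers))
    remaining = remaining.select_by_index(inliers, invert=True)

# Whatever is left over is clustered into smaller blobs (candidate objects).
labels = np.asarray(remaining.cluster_dbscan(eps=0.02, min_points=50))
n_clusters = labels.max() + 1 if labels.size else 0
print(len(patches), "planar patches and", n_clusters, "candidate object clusters")
```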
Pointing-Out Recognized Objects
This Husky robot has a Schunk arm mounted on it. The rotating sensor is a Velodyne Lidar. An ASUS Xtion Pro mounted on the end-effector of the Schunk arm produces RGB-D (color + 3D information) images. Using these images, the objects in the scene are recognized and localized by our software. The arm is then moved to point to each recognized object in turn. The window at the upper right visualizes what is happening, using 3D models of the robot, the objects, and the 3D point clouds.
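Once an object's position is known in the camera frame, pointing at it mainly involves two steps: transforming the position into the arm's base frame using the known camera-mounting transform, and converting the resulting direction into pan and tilt (yaw and pitch) angles for the pointing pose. The short NumPy sketch below illustrates that geometry; the transform and object position are made-up example values, and a real system would hand the target to an inverse-kinematics solver for the full arm instead.

```python
import numpy as np

# Example (made-up) homogeneous transform from camera frame to arm base frame.
T_base_cam = np.array([
    [ 0.0,  0.0, 1.0, 0.30],
    [-1.0,  0.0, 0.0, 0.00],
    [ 0.0, -1.0, 0.0, 0.80],
    [ 0.0,  0.0, 0.0, 1.00],
])

# Example object position reported by the recognition stage, in the camera frame.
p_cam = np.array([0.10, -0.05, 0.90, 1.0])

# Express the object in the arm base frame.
p_base = T_base_cam @ p_cam
x, y, z = p_base[:3]

# Direction from the base towards the object as yaw (pan) and pitch (tilt) angles.
yaw = np.arctan2(y, x)
pitch = np.arctan2(z, np.hypot(x, y))
print(f"point towards yaw={np.degrees(yaw):.1f} deg, pitch={np.degrees(pitch):.1f} deg")
```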