knowledge through interaction with the environment. The knowledge can
be acquired only if suitable perception-action capabilities are
present: a robotic system has to be able to detect, attend to and
manipulate objects in the environment as well as interact with people
and other robots. We present our long-term work in the area of
vision-based sensing and control, with specific objectives on
attention, segmentation and learning. By attention, we mean the
capability to decide where to direct the sensory system, i.e. to what
part of the environment, in time or space. By segmentation, we mean
the clustering or connecting of information into more complex or
higher-level entities.
The implication of learning is twofold: first, a robot should be able
to adapt and learn different concepts from experience; second, as
discussed below, a portion of its early reasoning should be
hard-wired rather than learned.
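As a toy illustration of what bottom-up attention amounts to, the
following sketch (our own minimal Python example, not the system
described here; saliency_map and next_fixation are hypothetical names)
computes a center-surround saliency map over image intensity and
returns the most salient location as the next fixation point:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saliency_map(intensity):
        # Bottom-up saliency as center-surround contrast: the
        # difference between a fine and a coarse Gaussian blur.
        fine = gaussian_filter(intensity, sigma=2)
        coarse = gaussian_filter(intensity, sigma=16)
        return np.abs(fine - coarse)

    def next_fixation(intensity):
        # Where to direct the sensor next: the most salient pixel.
        s = saliency_map(intensity)
        return np.unravel_index(np.argmax(s), s.shape)

    # Toy scene: flat background with a single bright blob.
    scene = np.zeros((128, 128))
    scene[40:48, 90:98] = 1.0
    print(next_fixation(scene))  # a location in or near the blob

In a real system such a map would combine many feature channels and be
modulated top-down by the current task; the point here is only that
"where to look next" can be read off a scalar map.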
Related to visual sensing, we present a stereo-based active vision
system framework in which aspects of top-down, bottom-up and foveated
attention are put into focus, and we demonstrate how a mechanical
system can simplify visual processing. In addition, we show how the
system enables the robot to perform object grasping and manipulation
tasks.
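To give a feel for how foveation simplifies processing, the toy sketch
below (ours; the function name and parameters are assumptions, not the
presented framework) keeps a small full-resolution window around the
fixation point and a coarsely subsampled periphery, then compares the
pixel budget against processing the full frame:

    import numpy as np

    def foveate(frame, fix_r, fix_c, fovea=32, stride=8):
        # Full resolution inside a small foveal window around the
        # fixation point; subsample everything else by `stride`.
        half = fovea // 2
        r0, r1 = max(0, fix_r - half), min(frame.shape[0], fix_r + half)
        c0, c1 = max(0, fix_c - half), min(frame.shape[1], fix_c + half)
        return frame[r0:r1, c0:c1], frame[::stride, ::stride]

    frame = np.random.rand(480, 640)
    fovea_patch, periphery = foveate(frame, 240, 320)
    kept = fovea_patch.size + periphery.size
    print(f"pixels processed: {kept} of {frame.size} "
          f"({100 * kept / frame.size:.1f}%)")  # roughly 2%

This is the computational analogue of the mechanical simplification
above: by moving the sensor so that the region of interest falls in
the fovea, most of the image never needs to be processed at full
resolution.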
In regard to learning, we are motivated by the learning process of
humans. Specifically, our work on building learning models is
motivated by probabilistic reasoning. We build models that are capable
of reasoning from partial knowledge and are aware of the uncertainty
of their decisions.
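To illustrate the flavor of such models, here is a minimal Bayes-rule
sketch (a toy of our own with made-up numbers, not a model from this
work): an object class is inferred from whichever visual cues happen
to be observed, and the posterior itself quantifies how uncertain the
decision is:

    import numpy as np

    # Toy generative model: two object classes, two binary visual cues.
    prior = np.array([0.5, 0.5])        # P(cup), P(ball)
    p_handle = np.array([0.9, 0.05])    # P(handle visible | class)
    p_round = np.array([0.3, 0.95])     # P(round outline | class)

    def posterior(handle=None, is_round=None):
        # Bayes rule over whichever cues were actually observed;
        # unobserved cues (None) simply drop out of the likelihood,
        # which is how the model reasons from partial knowledge.
        like = np.ones(2)
        if handle is not None:
            like *= p_handle if handle else 1 - p_handle
        if is_round is not None:
            like *= p_round if is_round else 1 - p_round
        post = prior * like
        return post / post.sum()

    print(posterior(handle=True))                 # one cue: ~[0.95, 0.05]
    print(posterior(handle=True, is_round=True))  # conflicting cues:
                                                  # ~[0.85, 0.15], less certain

Because the output is a full posterior rather than a single label,
downstream components can see when evidence is conflicting and, for
instance, trigger another fixation before acting.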
Further, even though a large part of our reasoning capabilities is
learned from experience, a significant portion of early reasoning is
hard-wired; to mimic this, we study bottom-up models, in particular
models applied to visual sensing and motion analysis. By integrating
these systems into a single framework, we can both exploit and further
understand the implications of attention, segmentation and learning.