-
Glette, Kyrre
(2020).
Evolutionary algorithms for intelligent robots.
-
Tørresen, Jim
(2019).
Kunstig intelligens – hvem, hva og hvor.
(Eng. Artificial Intelligence – who, what and where).
-
Tørresen, Jim
(2019).
Making Robots Adaptive and Preferable to Humans.
-
Tørresen, Jim
(2019).
Intelligent Robots and Systems in Real-World Environment.
-
Glette, Kyrre
(2019).
Kunstig intelligens for tilpasningsdyktige roboter.
(Eng. Artificial intelligence for adaptive robots).
-
Glette, Kyrre; Nygaard, Tønnes Frostad & Vogt, Yngve
(2019).
Her er universitetets nest selvlærende robot.
(Eng. Here is the university's next self-learning robot).
[Business/trade/industry journal].
Teknisk ukeblad.
-
Tørresen, Jim
(2019).
Intelligent and Adaptive Robots in Real-World Environment.
-
Tørresen, Jim
(2019).
Design and Control of Robots for Real-World Environment.
-
Tørresen, Jim
(2019).
Artificial Intelligence and Applications in Health and Care.
-
Tørresen, Jim
(2019).
Sensing Human State with Application in Older People Care and Mental Health Treatment.
-
Tørresen, Jim
(2019).
Supporting Older People with Robots for Independent Living.
-
Ellefsen, Kai Olav & Tørresen, Jim
(2019).
Evolutionary Robotics: Automatic design of robot bodies and control.
-
Tørresen, Jim
(2019).
Hva er kunstig intelligens?
(Eng. What is artificial intelligence?).
-
Tørresen, Jim
(2019).
Future and Ethical Perspectives of Robotics and AI.
-
Nygaard, Tønnes Frostad; Martin, Charles Patrick; Tørresen, Jim & Glette, Kyrre
(2019).
Self-Modifying Morphology Experiments with DyRET: Dynamic Robot for Embodied Testing.
-
Teigen, Bjørn Ivar; Ellefsen, Kai Olav & Tørresen, Jim
(2019).
A Categorization of Reinforcement Learning Exploration Techniques Which Facilitates Combination of Different Methods.
-
Ellefsen, Kai Olav & Tørresen, Jim
(2019).
Self-Adapting Goals Allow Transfer of Predictive Models to New Tasks.
-
Nordmoen, Jørgen Halvorsen; Nygaard, Tønnes Frostad; Ellefsen, Kai Olav & Glette, Kyrre
(2019).
Evolved embodied phase coordination enables robust quadruped robot locomotion.
-
Nygaard, Tønnes Frostad; Nordmoen, Jørgen Halvorsen; Ellefsen, Kai Olav; Martin, Charles Patrick; Tørresen, Jim & Glette, Kyrre
(2019).
Experiences from Real-World Evolution with DyRET: Dynamic Robot for Embodied Testing.
-
Ellefsen, Kai Olav; Huizinga, Joost & Tørresen, Jim
(2019).
Guiding Neuroevolution with Structural Objectives.
-
Becker, Artur; Herrebrøden, Henrik; Gonzalez Sanchez, Victor Evaristo; Nymoen, Kristian; Dal Sasso Freitas, Carla Maria & Tørresen, Jim
(2019).
Functional Data Analysis of Rowing Technique Using Motion Capture Data.
We present an approach to analyzing the motion capture data of rowers using bivariate functional principal component analysis (bfPCA). The method has been applied on data from six elite rowers rowing on an ergometer. The analyses of the upper and lower body coordination during the rowing cycle revealed significant differences between the rowers, even though the data was normalized to account for differences in body dimensions. We make an argument for the use of bfPCA and other functional data analysis methods for the quantitative evaluation and description of technique in sports.
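The functional-PCA idea above can be illustrated in miniature. The sketch below runs plain principal component analysis over synthetic, time-normalised joint-angle curves; the paper's actual method is bivariate functional PCA on motion capture data, and the data, sizes, and variable names here are invented for illustration:

```python
import numpy as np

# Synthetic data: 6 "rowers", each a joint-angle curve sampled at 100
# time points across one rowing cycle (phase-shifted sines plus noise).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
curves = np.stack([np.sin(2 * np.pi * t + 0.2 * i) + 0.05 * rng.standard_normal(100)
                   for i in range(6)])

# Centre each curve against the ensemble mean, then extract the
# principal modes of variation between rowers via SVD.
mean_curve = curves.mean(axis=0)
centered = curves - mean_curve
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U * S                      # each rower's coordinates in PC space
explained = S**2 / np.sum(S**2)     # fraction of variance per component

print("variance explained by PC1:", round(float(explained[0]), 3))
```

The per-rower `scores` are what an analysis like the paper's would compare between athletes after body-dimension normalisation.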
-
Tørresen, Jim; Glette, Kyrre & Ellefsen, Kai Olav
(2019).
Adaptive Robot Body and Control for Real-World Environments.
-
Tørresen, Jim; Glette, Kyrre & Ellefsen, Kai Olav
(2019).
Intelligent, Adaptive Robots in Real-World Scenarios.
-
Ellefsen, Kai Olav
(2019).
Hva Kan Roboter Lære av Biologisk Liv?
(Eng. What Can Robots Learn from Biological Life?).
-
Nordmoen, Jørgen Halvorsen & Fadelli, Ingrid
(2019).
A new method to enable robust locomotion in a quadruped robot.
[Internet].
TechXplore.
-
Næss, Torgrim Rudland; Tørresen, Jim & Martin, Charles Patrick
(2019).
A Physical Intelligent Instrument using Recurrent Neural Networks.
-
Martin, Charles Patrick & Tørresen, Jim
(2019).
An Interactive Musical Prediction System with Mixture Density Recurrent Neural Networks.
-
Martin, Charles Patrick & Tørresen, Jim
(2019).
An Interactive Music Prediction System with Mixture Density Recurrent Neural Networks.
-
Martin, Charles Patrick; Næss, Torgrim Rudland; Faitas, Andrei & Baumann, Synne Engdahl
(2019).
Session on Musical Prediction and Generation with Deep Learning.
-
Nygaard, Tønnes Frostad; Nordmoen, Jørgen Halvorsen; Martin, Charles Patrick; Tørresen, Jim & Glette, Kyrre
(2019).
Lessons Learned from Real-World Experiments with DyRET: the Dynamic Robot for Embodied Testing.
-
Glette, Kyrre
(2019).
Kunstig intelligens for tilpasningsdyktige roboter.
(Eng. Artificial intelligence for adaptive robots).
-
Faitas, Andrei; Baumann, Synne Engdahl; Tørresen, Jim & Martin, Charles Patrick
(2019).
Generating Convincing Harmony Parts with Simple Long Short-Term Memory Networks.
-
Miseikis, Justinas; Brijacak, Inka; Yahyanejad, Saeed; Glette, Kyrre; Elle, Ole Jacob & Tørresen, Jim
(2019).
Two-Stage Transfer Learning for Heterogeneous Robot Detection and 3D Joint Position Estimation in a 2D Camera Image Using CNN.
-
Miura, Jun & Tørresen, Jim
(2019).
Intelligent Robot Technologies for Care and Lifestyle Support.
-
Comba, Joao Luiz Dihl & Tørresen, Jim
(2019).
Visual Data Analysis of Unstructured and Big Data.
-
Rohlfing, Katharina J. & Tørresen, Jim
(2019).
Explainability: an interactive view.
-
Tørresen, Jim
(2018).
Remote Lab and Applications for High Performance and Embedded Architectures.
-
Tørresen, Jim
(2018).
Artificial Intelligence – State-of-the-art.
-
Tørresen, Jim
(2018).
Kunstig Intelligens – Lærende og tilpasningsdyktig teknologi.
(Eng. Artificial Intelligence – learning and adaptive technology).
-
Tørresen, Jim
(2018).
Roboter kommer nærmere – skal vi glede eller grue oss?
(Eng. Robots are coming closer – should we be excited or worried?).
-
Martin, Charles Patrick; Jensenius, Alexander Refsum & Tørresen, Jim
(2018).
Composing an ensemble standstill work for Myo and Bela.
This paper describes the process of developing a standstill performance work using the Myo gesture control armband and the Bela embedded computing platform. The combination of Myo and Bela allows a portable and extensible version of the standstill performance concept while introducing muscle tension as an additional control parameter. We describe the technical details of our setup and introduce Myo-to-Bela and Myo-to-OSC software bridges that assist with prototyping compositions using the Myo controller.
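The Myo-to-OSC bridge mentioned above forwards sensor data as Open Sound Control messages. As an illustration of the wire format such a bridge emits, here is a minimal OSC 1.0 message encoder in pure Python; the `/myo/emg` address and the single-float payload are illustrative assumptions, not the bridge's actual message layout:

```python
import struct

def osc_message(address, value):
    """Encode a minimal OSC 1.0 message with one 32-bit float argument."""
    def pad(b):
        # OSC strings are null-terminated and padded to 4-byte multiples.
        b += b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    # Address pattern, then type tag string ",f", then big-endian float32.
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# A hypothetical muscle-tension reading, ready to send over UDP to Bela.
msg = osc_message("/myo/emg", 0.42)
print(len(msg), msg[:8])
```

Sending `msg` over a UDP socket to the Bela's OSC port is all a bridge process like this needs to do per sensor frame.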
-
Martin, Charles Patrick; Xambó, Anna; Visi, Federico; Morreale, Fabio & Jensenius, Alexander Refsum
(2018).
Stillness under Tension.
"Stillness under Tension" is an ensemble standstill work for the Myo gesture control armband and the Bela embedded music platform. Humans are incapable of standing completely still due to breathing and other involuntary micromotions. This work explores the expressive space of standing still through an inverse action-sound mapping: less movement leads to more sound. Four performers stand as still as possible on stage, each wearing a Myo armband connected to a Bela embedded sound processing platform. The Myo measures the performers' movement and the muscle activity in their forearms, which they can use, both voluntarily and involuntarily, to control a synthesised sound world. Each performer uses one Myo and one Bela in a musical space defined by their physical position and posture while standing still.
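The inverse action-sound mapping described above can be sketched in a few lines. This is not the piece's actual synthesis code; the function name, the windowing choice, and the scaling constant are illustrative:

```python
import numpy as np

def stillness_to_amplitude(accel_window, sensitivity=5.0):
    """Map a window of accelerometer magnitudes to a synth amplitude in (0, 1].

    Movement is summarised as the standard deviation of recent samples;
    the mapping is inverted so that less movement yields more sound.
    """
    movement = float(np.std(accel_window))
    return 1.0 / (1.0 + sensitivity * movement)

# Perfectly still: constant gravity reading -> maximum amplitude.
still = stillness_to_amplitude(np.full(50, 9.81))
# Swaying performer: oscillating reading -> amplitude falls toward zero.
moving = stillness_to_amplitude(9.81 + np.sin(np.linspace(0, 10, 50)))
print(still, moving)
```

In a real deployment the amplitude would drive the Bela's synthesis engine each control block rather than being printed.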
-
Jensenius, Alexander Refsum; Martin, Charles Patrick; Bjerkestrand, Kari Anne Vadstensvik & Johnson, Victoria
(2018).
Stillness under Tension.
-
Ellefsen, Kai Olav & Tørresen, Jim
(2018).
Evolutionary Robotics: Automatic design of robot controllers and bodies.
-
Ellefsen, Kai Olav
(2018).
Evolusjonær Robotikk: Automatisk design og kontroll av roboter.
(Eng. Evolutionary Robotics: Automatic design and control of robots).
-
Søyseth, Vegard Dønnem; Nygaard, Tønnes Frostad; Martin, Charles Patrick; Uddin, Md Zia & Ellefsen, Kai Olav
(2018).
ROBIN-Stand ved Cutting Edge 2018.
(Eng. ROBIN stand at Cutting Edge 2018).
-
(2018).
Real-World Evolution Adapts Robot Morphology and Control to Hardware Limitations.
-
Nygaard, Tønnes Frostad; Søyseth, Vegard Dønnem; Nordmoen, Jørgen Halvorsen & Glette, Kyrre
(2018).
Stand with the DyRET robot.
-
Martin, Charles Patrick
(2018).
MicroJam.
MicroJam is a mobile app for sharing tiny touch-screen performances. Mobile applications that streamline creativity and social interaction have enabled a very broad audience to develop their own creative practices. While these apps have been very successful in visual arts (particularly photography), the idea of social music-making has not had such a broad impact. MicroJam includes several novel performance concepts intended to engage the casual music maker and inspired by current trends in social creativity support tools. Touch-screen performances are limited to 5-seconds, instrument settings are posed as sonic "filters", and past performances are arranged as a timeline with replies and layers. These features of MicroJam encourage users not only to perform music more frequently, but to engage with others in impromptu ensemble music making.
-
Martin, Charles Patrick
(2018).
Predictive Music Systems for Interactive Performance.
Automatic music generation is a compelling task where much recent progress has been made with deep learning models. But how can these models be integrated into interactive music systems, and how can they encourage or enhance the music making of human users?
Musical performance requires prediction, both to operate instruments and to perform in groups. Predictive models can help interactive systems to understand their temporal context and ensemble behaviour. Deep learning allows data-driven models with a long memory of past states.
This process could be termed "predictive musical interaction", where a predictive model is embedded in a musical interface, assisting users by predicting unknown states of musical processes. I’ll discuss a framework for predictive musical interaction including examples from our lab, and consider how this work could be applied more broadly in HCI and robotics. This talk will cover material from this paper: https://arxiv.org/abs/1801.10492
-
Martin, Charles Patrick; Glette, Kyrre & T?rresen, Jim
(2018).
Creative Prediction with Neural Networks.
The goal of this tutorial is to apply predictive machine learning models to creative data. The focus of the tutorial will be recurrent neural networks (RNNs), deep learning models that can be used to generate sequential and temporal data. RNNs can be applied to many kinds of creative data including text and music. They can learn the long-range structure from a corpus of data and “create” new sequences by predicting one element at a time. When embedded in a creative interface, they can be used for “predictive interaction” where a human collaborates with, influences, and is influenced by a generative neural network.
We will walk through the fundamental steps for training creative RNNs using live-coded demonstrations with Python code in Jupyter Notebooks. These steps are: collecting and cleaning data, building and training an RNN, and developing predictive interactions. We will also have live demonstrations and interactive live-hacking of our creative RNN systems!
You’re welcome to bring a laptop with Python to the tutorial and load up our code examples, or to follow along with us on the screen!
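The "predict one element at a time" loop at the heart of the tutorial can be sketched with an untrained recurrent cell in NumPy. The weights below are random, so the generated sequence is arbitrary; the point is the feedback loop in which each sampled prediction becomes the next input. Vocabulary and layer sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, hidden = 4, 8                     # e.g. 4 possible musical events
Wxh = rng.standard_normal((hidden, vocab)) * 0.1   # input-to-hidden weights
Whh = rng.standard_normal((hidden, hidden)) * 0.1  # hidden-to-hidden weights
Why = rng.standard_normal((vocab, hidden)) * 0.1   # hidden-to-output weights

def step(h, x_id):
    """One RNN step: update the hidden state, return next-element probabilities."""
    x = np.zeros(vocab)
    x[x_id] = 1.0                        # one-hot encode the current element
    h = np.tanh(Wxh @ x + Whh @ h)
    logits = Why @ h
    p = np.exp(logits - logits.max())
    return h, p / p.sum()                # softmax over the vocabulary

h, x_id, generated = np.zeros(hidden), 0, []
for _ in range(10):                      # generate a 10-element sequence
    h, p = step(h, x_id)
    x_id = int(rng.choice(vocab, p=p))   # sample, then feed back as input
    generated.append(x_id)
print(generated)
```

Training would fit `Wxh`, `Whh`, and `Why` to a corpus so the sampled sequences reflect its structure; the generation loop itself is unchanged.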
-
Garcia Ceja, Enrique Alejandro; Ellefsen, Kai Olav; Martin, Charles Patrick & T?rresen, Jim
(2018).
Prediction, Interaction, and User Behaviour.
The goal of this tutorial is to apply predictive machine learning models to human behaviour through a human computer interface. We will introduce participants to the key stages for developing predictive interaction in user-facing technologies: collecting and identifying data, applying machine learning models, and developing predictive interactions. Many of us are aware of recent advances in deep neural networks (DNNs) and other machine learning (ML) techniques; however, it is not always clear how we can apply these techniques in interactive and real-time applications. Apart from well-known examples such as image classification and speech recognition, what else can predictive ML models be used for? How can these computational intelligence techniques be deployed to help users?
In this tutorial, we will show that ML models can be applied to many interactive applications to enhance users’ experience and engagement. We will demonstrate how sensor and user interaction data can be collected and investigated, modelled using classical ML and DNNs, and where predictions of these models can feed back into an interface. We will walk through these processes using live-coded demonstrations with Python code in Jupyter Notebooks so participants will be able to see our investigations live and take the example code home to apply in their own projects.
Our demonstrations will be motivated from examples from our own research in creativity support tools, robotics, and modelling user behaviour. In creativity, we will show how streams of interaction data from a creative musical interface can be modelled with deep recurrent neural networks (RNNs). From this data, we can predict users’ future interactions, or the potential interactions of other users. This enables us to “fill in” parts of a tablet-based musical ensemble when other users are not available, or to continue a user’s composition with potential musical parts. In user behaviour, we will show how smartphone sensor data can be used to infer user contextual information such as physical activities. This contextual information can be used to trigger interactions in smart home or internet of things (IoT) environments, to help tune interactive applications to user’s needs, or to help track health data.
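The activity-inference example described above can be sketched with a deliberately simple stand-in model. A nearest-centroid classifier replaces the classical ML and DNN models the tutorial actually uses; the synthetic accelerometer windows, labels, and (mean, std) features are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def features(window):
    """Summarise an accelerometer-magnitude window as (mean, std)."""
    return np.array([window.mean(), window.std()])

# Synthetic training windows: "sitting" is low-variance around gravity,
# "walking" has large variance from the stepping motion.
sitting = [9.81 + 0.05 * rng.standard_normal(64) for _ in range(20)]
walking = [9.81 + 2.0 * rng.standard_normal(64) for _ in range(20)]
centroids = {
    "sitting": np.mean([features(w) for w in sitting], axis=0),
    "walking": np.mean([features(w) for w in walking], axis=0),
}

def classify(window):
    """Label a new window by its nearest class centroid in feature space."""
    f = features(window)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

print(classify(9.81 + 0.05 * rng.standard_normal(64)))  # a still window
print(classify(9.81 + 2.0 * rng.standard_normal(64)))   # an active window
```

In a smart-home or IoT setting, the label stream from such a classifier is the contextual signal that triggers the interactions the tutorial describes.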
-
Tørresen, Jim
(2018).
Intelligent Systems for Medical and Healthcare Applications.
-
Tørresen, Jim
(2018).
Når etikk betyr alt.
(Eng. When ethics means everything).
Dagens Næringsliv.
ISSN 0803-9372.
-
Tørresen, Jim; Garcia Ceja, Enrique Alejandro; Ellefsen, Kai Olav & Martin, Charles Patrick
(2018).
Equipping Systems with Forecasting Capabilities.
-
Tørresen, Jim
(2018).
Ethical Robots and Autonomous Systems.
-
Tørresen, Jim
(2018).
Kunstig intelligens – hvem, hva og hvor.
(Eng. Artificial Intelligence – who, what and where).
-
Tørresen, Jim
(2018).
Artificial Intelligence Applied for Real-World Systems.
-
Næss, Torgrim Rudland; Martin, Charles Patrick & Tørresen, Jim
(2019).
A Physical Intelligent Instrument using Recurrent Neural Networks.
Universitetet i Oslo.
-
Tørresen, Jim; Teigen, Bjørn Ivar & Ellefsen, Kai Olav
(2018).
An Active Learning Perspective on Exploration in Reinforcement Learning.
Universitetet i Oslo.
-
Fjeld, Matias Hermanrud & Tørresen, Jim
(2018).
3D Spatial Navigation in Octrees with Reinforcement Learning.
Universitetet i Oslo.
-
Wallace, Benedikte & Martin, Charles Patrick
(2018).
Predictive songwriting with concatenative accompaniment.
Universitetet i Oslo.