# IFAC blog page

Over the last three decades, engineered communication systems, enabled by the development of cheap sensing and computation, lightweight battery storage and low-power actuation, have become pervasive. Concurrently, advances in health care and medicine have led, and will continue to lead, to ageing populations throughout the industrialised world.

One impact is that the current generation of retirees will live longer and is arguably the first to have grown up with communication and automation technologies as an integral part of everyday life. As a consequence, there are significant opportunities to develop assistive technologies based on automation systems that provide both better quality of life and lower medical costs, as user acceptance of the developed technologies will likely be high. It is critically important to engage with likely users during the development process, to ensure that interfaces and social aspects are properly identified. Of particular relevance is the need to avoid overly intrusive approaches, by capitalising on the embedded nature of suitable technology, and to undertake co-design with the target groups.

The opportunities for automation and ICT technologies then range from non-intrusive detection of incidents, through intervention that partially or even completely mitigates damage, to assistance in recovery from incidents through appropriate rehabilitation.

As an illustration of the potential benefits of seamless integration of automation (one of many that could be highlighted), consider falls in the elderly. These are by far the most common preventable incident with serious consequences for the over-65s, and account for a clear majority of hospitalisations in this age group.

From a detection standpoint, there is an opportunity to leverage smartphone uptake (recent surveys indicate that the vast majority of the elderly today own smartphones). The integration of multiple sensors into phones has already been exploited by a number of apps for background fall detection. In essence, these use relatively simple algorithms, with thresholds set on acceleration measurements from the inertial measurement units integrated in the phones, to trigger the detection of an event. With the introduction of smartwatches carrying their own sensors, false classification rates will fall, as it is easier to prevent misclassification when the sensor's position relative to the user's body is known to be constant.
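In essence, such an app reduces to a pair of thresholds on the acceleration magnitude: a free-fall dip followed shortly by an impact spike. A minimal sketch is below; the threshold values are illustrative only, not taken from any real product or clinical study.

```python
import math

# Free-fall dip and impact spike thresholds, in units of g.
# Illustrative values only, not clinically validated.
FREE_FALL_G = 0.4
IMPACT_G = 2.5

def detect_fall(samples, window=10):
    """samples: sequence of (ax, ay, az) accelerometer readings in g.
    Flag a fall when a free-fall dip is followed by an impact spike
    within `window` samples, mimicking simple threshold-based apps."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G and any(v > IMPACT_G for v in mags[i:i + window]):
            return True
    return False
```

Note that a pure spike (e.g. the phone being dropped onto a table) without the preceding free-fall dip is not flagged, which is exactly the kind of misclassification a body-fixed smartwatch sensor makes easier to rule out.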

Naturally, the detection of a potentially injurious event should be coupled with the ability to mitigate the damage. In existing applications, detection of a potential fall via the phone already triggers an alert to one or more designated persons, along with GPS data describing the wearer's location. The alert is typically sent after a short delay, during which the owner has the opportunity to correct a false positive classification. Smartphones can also automatically undertake a tiered call response cycle under these conditions, from family through to emergency services.
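The tiered call cycle can be sketched as a simple escalation routine. The contact tiers and behaviour below are hypothetical, for illustration only:

```python
# Hypothetical contact tiers for the escalation cycle described above.
CONTACT_TIERS = ["family", "neighbour", "emergency services"]

def escalate(cancelled_in_grace_period, answered_by=None):
    """Return the tiers that get alerted after a detected fall.
    `cancelled_in_grace_period` models the owner dismissing a false
    positive during the short delay; `answered_by` is the first tier
    that responds (None means nobody answers)."""
    if cancelled_in_grace_period:
        return []  # false positive corrected, no alert sent
    alerted = []
    for tier in CONTACT_TIERS:
        alerted.append(tier)
        if tier == answered_by:
            break  # someone took responsibility, stop escalating
    return alerted
```

The grace period is the key usability feature: it keeps the false-positive cost of a sensitive detector low enough that users tolerate it running in the background.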

One of the most serious consequences of falls amongst the elderly is fracture of the femoral head, a more common occurrence as both bone density and reaction time (which would help mitigate a fall as it happens) typically diminish with age. Furthermore, the complications associated with the surgical procedure for femoral repair lead to significantly increased mortality risk in the elderly. Mitigating the effect of a fall by cushioning the femur in at-risk areas is possible through passive protection; however, active devices operating on an air-bag principle, activated on fall detection through smart sensing and classification, may provide a less intrusive system with better damage mitigation.

Of even greater potential than mitigating a fall is the design and development of devices that can prevent it. Such a system requires early detection before action is taken, relying on more sophisticated sensor feedback and smarter integration with the wearer's natural actuation capability, so as to provide assistance when required. There are already commercial prototype systems aiming to do so, such as the Hybrid Assistive Limb (HAL) developed by Cyberdyne (no, not (yet) the Terminator company!), which rely on conventional robotic architectures to provide functionality.

Usability will only increase as the man-machine interface is continually refined. Assistive technologies include mechatronic aids that assist rather than entirely prevent falls: a lightweight mechanical assistive support device could achieve this, as an intelligent reconceptualisation of the once-familiar polio leg braces. However, the advent of wearable sensors and soft robotics offers perhaps greater potential, due to reduced weight and consequently reduced on-board power requirements, as demonstrated in prototype systems under development at various research institutes.

Finally, bio-mechatronics offers great potential in targeting rehabilitation strategies towards individual patients. The opportunities include using available sensing technologies to record real-world activity, which clinicians can interpret to gauge and improve recovery. Assistive therapeutic aids can initially enhance patient capability through EMG feedback and then transition to systems that retrain or strengthen muscles, thereby reducing the probability of further injurious events. Such approaches may partially remove the need for physiotherapy to be conducted wholly onsite, reducing treatment costs and improving recovery rates.

So while medical and health sciences can claim some responsibility for creating the (nice) problem of an increasing ageing segment of the population, it is perhaps engineering and automation technologies that are going to play a major role in assisting that population to continue to live active and fulfilling lives. It is, however, critical that the age groups concerned play an active, participatory and responsible role in the co-design of the devices and automation being created for their benefit.

Article provided by:
Prof Chris Manzie
Department of Mechanical Engineering
University of Melbourne
IFAC TC 4.2 (Mechatronic Systems) and 7.1 (Automotive Control)

Traffic Control Centres (TCCs) are expensive pieces of infrastructure tasked with sensing, surveying, monitoring, and actively interfering with traffic flow in road networks.

Figure 1 provides a broad overview of how a TCC operates. The controlled system is a network of roads equipped with sensors and control effectors. Two-way flow of information to and from the field is handled by the IT infrastructure maintained by the TCC. Network operators manage traffic in real time based on the streams of information converging on the traffic control room. They have to decide which objectives and policies to support, and how to implement them by managing the available control devices.

Figure 1: schematic of a TCC.

This is a most challenging and highly complicated task, encompassing diverse hardware and software systems, which have to be operated following specific regulations and procedures, in support of policies and objectives defined by network operators or wider political bodies. The complexity of the traffic flow management problem is due to the often chaotic nature of human behaviour, the diverse needs generating the individual trips, the constraints imposed by regulations (e.g. safety), and the objectives TCCs pursue (e.g. delay minimisation or emissions reduction).

Different control architectures can be conceptualised for performing the same tasks. Currently, the most common architecture adopted by TCC owners is a centralised control structure, allowing room for decentralised operations under strong supervision. A lot of money has been invested in this kind of infrastructure, resulting mostly in static networks of sensors (loop detectors, CCTV, etc.) and control effectors (traffic lights, variable message signs, etc.). Usually, it is within this framework that control systems for particular traffic management applications are designed.

With the advent of highly equipped vehicles and vehicle automation, Vehicle Automation and Communication Systems (VACS) are changing the system architecture of traffic management. VACS are treasure troves of information, as a lot of data can be extracted from them to address a variety of needs, e.g. commercial, infotainment and traffic management. From a control engineering perspective, however, such information is of little help unless it is explicitly used to positively interfere with traffic in real time.

In this sense, VACS become both the sensor and the control device: they are simultaneously the means of collecting and transmitting information and of actively interfering with traffic. Once they operate within a highly robust, secure and high-performance communication network, static sensors and control systems will become obsolete, a memory of the past or at best a fall-back system. Fundamentally different operational requirements then apply compared to those of centralised architectures, posing new challenges for the control design of network-wide vehicular flow.

The control technology for completely automating a vehicle is largely available. Of course there are challenges, as discussed in a previous entry in IFAC's blog. However, going from the individual vehicle to the aggregate behaviour of several thousands of vehicles, and the control of their collective interaction, is an entirely different control problem, and in many respects a more difficult one to address. A fundamental change in thinking, tailored to this new road / communications infrastructure / vehicle / driver system, is necessary.

Many different scenarios can be envisaged, including:

• Compulsory intervention by a TCC authority in vehicle control. This implies that full control of the vehicle is delegated to a traffic authority. Acceleration, speed and position trajectories are decided by a higher-level system supervising an area and deciding on the optimal vehicle operation according to some societal notion of cost. Dedicated lanes segregating manual and autonomous vehicles could also be used, although this is very difficult, particularly in urban environments.
• Partial intervention by a TCC authority. In this case, vehicle control is assumed (wholly or partially) by a traffic authority should certain conditions arise, e.g. on a congested road section or near the approaches to an intersection.
• Freely acting informed drivers. In this case, it is the drivers' intelligence that takes over as the regulator of traffic, under the influence of information communicated to them through an appropriate human-machine interface. This scenario does not exclude the use of autonomous vehicles, but the decision to allow a traffic authority access to, and control of, a vehicle is left to the driver's discretion.

#### Are you ready to give up control of your car for the sake of traffic management? Are you willing to delegate your vehicle’s control to a different authority, other than you?

Although the answer seems to be “yes” when this question arises in the context of individual vehicle platforms, it may not be so when posed in the context of everyday commuting and travelling. Leaving aside institutional and legal issues, there is the question of whether people will accept losing their freedom of action in operating their own car. There are situations where a “yes” or a “no” seems clear. When you are stuck in a solid block of stop-and-go congestion, it seems much preferable to use the car as an office and work on a computer, or as a TV set and watch a movie, leaving the vehicle to crawl its way to the destination. When driving in the countryside, many people would respond with a “no”, preferring to drive manually simply to enjoy the experience.

But what happens when, while commuting to work, you believe that what is suggested, or the way your vehicle is operated (let's say by a TCC), is not the best for you? It may be the best at the level of societal benefit (although not necessarily so), i.e. the “common good”, but not at the individual level. Many people will answer “no” to this question, irrespective of whether we regard this as an egoistic response. Furthermore, the very notion of being forced to allow access to, and delegate control of, an object considered private may be unacceptable to many people. They cannot be neglected, nor can their choice be banned, since they are legitimate road users. Their existence shapes the properties of the traffic flow process, and hence they affect control design. In other words, there are strong cultural issues involved, which affect the efficiency of any large-area traffic control design.

Designing vehicle-based control systems supporting autonomous operation requires focusing primarily on the individual vehicle; designing network-wide traffic management controllers requires focusing on the broader picture of spatio-temporal traffic dynamics and on the way individual vehicles interact with other vehicles and with the infrastructure. All three scenarios outlined pose daunting challenges on the technical side, even if autonomous vehicles allow us to treat them as “ballerinas” in the daily commuting dance. The scenario of freely acting informed drivers, although the most challenging of the three, seems the most appropriate, the most politically rewarding and the easiest to promote to the public.

Article provided by:
Apostolos Kotsialos apostolos.kotsialos@durham.ac.uk
School of Engineering and Computing Sciences
Durham University, United Kingdom
IFAC TC 7.4 (Transportation Systems)

Education is changing rapidly, both due to an increased understanding of pedagogy and also the potential offered by new technology to do things better.

For example, there is an increasing understanding that student activity and involvement are critical for effective learning, and thus a move away from the dominant reliance on traditional lectures towards more interactive engagement activities. Another part of pedagogy receiving great current interest is feedback: what feedback do students need to support effective learning, and how can this feedback be provided efficiently and rapidly?

Within engineering, a classic engagement activity is pen-and-paper problem solving; however, this has the disadvantage of providing feedback that is either relatively slow (waiting for a tutor meeting) or fast but low-quality (such as a simple right/wrong). Advances in technology, and in particular universal access to powerful computing devices (phones, laptops, …), provide opportunities for staff to develop interactive learning environments which give immediate, high-quality feedback, allowing students to become more effective independent learners. It is gratifying to know that control researchers [4] are leading the global field in these developments.

The following gives some examples of how control engineers are embedding effective teaching pedagogies which encourage and facilitate student activity, reflection and independent learning. The main focus is on student activity by way of ‘free’ access to laboratory equipment so that students can easily apply their learning and experiment unhindered by rigid timetable constraints and closed laboratory instruction sheets.

#### Exploiting the Internet of Things for Control Education: virtual and remote laboratories.

The Internet of Things (IoT) is the network of physical devices connected to the Internet. These devices can therefore be accessed remotely, whether simply to monitor the devices and/or their surroundings, or even to control some of their functions.

The impact, applications and importance of the IoT have been growing over the last few years as the technology has progressed. In Google, a search for “internet of things” gives no fewer than twenty-two million results. Among them are articles from companies and publications as prominent as Forbes, Microsoft, The Guardian, Wired, Intel and Cisco. In Google News, just in the last week (at the moment of writing these lines), there is news on the IoT in Fortune, Bloomberg, TheStreet and TechRadar, for example. The huge number of references to the IoT, as well as the wide variety of places in which the topic is covered (from technology journals and webpages to society and financial ones), gives a good idea of the general interest in this concept. Also, according to 2015 research from Tata Consultancy Services, 26 companies plan to spend $1 billion or more each on IoT initiatives this year, while, according to the McKinsey Global Institute, the IoT has a total potential economic impact of $4 trillion to $11 trillion a year by 2025, which represents around 10% of the world economy.

Virtual and Remote Laboratories (VRLs) are part of the IoT. A remote laboratory (RL) uses laboratory equipment connected to the Internet so that teachers and students can operate it at a distance. RLs are the most immediate application of the IoT to education, especially in fields where experimentation is a key part of the learning process, as it is in control.

If you are interested in the topic, we invite you to visit the UniLabs portal on virtual and remote labs (http://unilabs.dia.uned.es/). As an example, below you can find the interface of the Control of the Ball and Hoop system.

#### Control of the ball and hoop system

The Ball and Hoop system is an electromechanical device consisting of a ball rolling on the rim of a hoop. The hoop is mounted on the shaft of a servomotor and can rotate about its axis. The rotation of the hoop causes an oscillatory movement of the ball around its equilibrium point. The behaviour of the ball is similar to the dynamics of a liquid inside a cylindrical container.

The main objective for this system is to control these oscillations. With this laboratory you can perform, among others, the following tasks and activities:

• Study of transmission zeros and non-minimum phase behaviour
• Velocity and position control of the hoop
• Control of deviation of the ball from its rest position
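To give a flavour of the underlying dynamics (this is not the UniLabs model itself), the ball's deviation from equilibrium behaves approximately like a lightly damped second-order system. The parameters below are invented for illustration:

```python
def simulate_ball(theta0=0.3, omega_n=4.0, zeta=0.1, dt=1e-3, t_end=5.0):
    """Euler-integrate theta'' + 2*zeta*omega_n*theta' + omega_n**2*theta = 0,
    a damped oscillator standing in for the ball's deviation angle (rad).
    omega_n, zeta, theta0 are illustrative, not identified from the rig."""
    theta, dtheta = theta0, 0.0
    for _ in range(int(t_end / dt)):
        ddtheta = -2.0 * zeta * omega_n * dtheta - omega_n ** 2 * theta
        dtheta += ddtheta * dt
        theta += dtheta * dt
    return theta

final = simulate_ball()  # the oscillation has largely died out after 5 s
```

The control exercises then amount to shaping this response with the servomotor: damping the oscillation faster than it decays naturally, while respecting the non-minimum phase behaviour listed above.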

#### Take home laboratories

One obvious mechanism for improving student access to equipment is to allow students to take equipment home, enabling them to perform open-ended tests at will. In recent years, data acquisition and control hardware has become relatively cheap, and this is a key enabler for the development of affordable laboratory equipment that can be produced in batches of tens or even hundreds, allowing every student to have their own!

##### Remote laboratory

Staff at the University of Sheffield have developed a platform [6] supporting the learning and application of skills across topics such as state-space design, state estimation, modelling, classical control and LabVIEW, embedded in an application area of obvious interest to aerospace engineers (https://www.youtube.com/watch?v=mudKnc6v07E). The hardware consists of a miniature three-degree-of-freedom (3DOF) helicopter, interfaced to a PC via an NI myDAQ. The construction allows for easy assembly and disassembly, so students can take the kit home or use it on any university computer. The parts cost of each kit was under £300, making it possible to provide each student with his or her own kit on a loan basis. The equipment has now been used by four different cohorts of students and the feedback is overwhelmingly positive, with students greatly appreciating the opportunity to put advanced theory into practice on a challenging real-world system, at a time and place of their choosing.
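The kind of state-space exercise such a kit supports can be illustrated on a toy model: treat one axis of the helicopter as a double integrator (angle and angular rate) and close the loop with state feedback. Both the model and the gains below are simplifications for illustration, not those of the Sheffield rig:

```python
def step_response(k1=4.0, k2=3.0, ref=1.0, dt=0.01, t_end=5.0):
    """Simulate x'' = u with state feedback u = -k1*(x - ref) - k2*x'.
    The closed-loop poles are the roots of s**2 + k2*s + k1, which are
    stable for these (illustrative) gains, so x settles at the reference."""
    x, v = 0.0, 0.0  # angle and angular rate, both starting at rest
    for _ in range(int(t_end / dt)):
        u = -k1 * (x - ref) - k2 * v
        v += u * dt
        x += v * dt
    return x

final_angle = step_response()  # close to the reference of 1.0 after 5 s
```

On the real hardware the rate is typically not measured directly, which is where the state-estimation part of the syllabus (an observer reconstructing x' from the angle measurement) comes in.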

#### Internet Based Control Education Conference

The IFAC Workshop on Internet Based Control Education (http://ibce15.unibs.it/) was held in Brescia, Italy, from 4th to 6th November 2015, organized by the University of Brescia (Italy) in cooperation with the Multisector Service and Technological Centre (CSMT), Brescia (Italy). IBCE15 was sponsored by the IFAC Technical Committee on Control Education (TC9-4) and co-sponsored by the IFAC Technical Committee on Computers for Control (TC3-1) and the IFAC Technical Committee on Control via Communication Networks (TC3-3). The workshop served as an international forum for interaction among engineers, scientists, and practitioners of control engineering who are interested in adopting and promoting internet-based methodologies for teaching control engineering. About 50 papers were presented, and the main topics addressed were virtual and remote labs, interactive tools, problem-based learning, internet-based control education assessment, and web-based educational environments. In general, there was a clear recognition that internet-based teaching methodologies can significantly enhance student learning, but they should be put in the right context in order to be fully effective. It is also clear that sharing resources can greatly simplify the work of the teacher.

Some of the attendees of the IFAC Workshop on Internet Based Control Education at the Mille Miglia museum in Brescia.

Article provided by
Sebastián Dormido sdormido@dia.uned.es Chair of the Technical Committee TC 9.4
J. Anthony Rossiter j.a.rossiter@sheffield.ac.uk vice-chair of Technical Committee TC 9.4
Bryn Ll Jones b.l.jones@sheffield.ac.uk
Antonio Visioli antonio.visioli@unibs.it Chair of IBCE 2015


The idea of autonomous cars has been in the air since as early as the 1920s, but the first prototypes of truly autonomous (albeit limited in performance) road vehicles appeared only in the 1980s. Since then, several companies, e.g., Mercedes, Nissan and Tesla, as well as many universities and research centres all over the world, have pursued the dream of self-driving cars. More recently, a few ad-hoc competitions and the increasing interest of some big tech companies have rapidly accelerated the research in this area and helped the development of advanced sensors and algorithms. As an example, consider that Google maintains (and publishes) monthly reports on the experimental tests and the most recent advances of its driverless car.

The reasons why such a technology is not yet on the market are many and varied. From a scientific point of view, autonomous road vehicles pose two major challenges:

• a communication challenge: how should the vehicle interact with the surrounding environment, taking all safety, technological and legal constraints into account?
• a vehicle dynamics challenge: the car must be able to follow a prescribed trajectory in any road condition.

On the one hand, the interaction with the environment mainly concerns sensing, self-adaptation to time-varying conditions and information exchange with other vehicles to optimize some utility functions (the so-called “internet of vehicles”, IoV). These issues undoubtedly represent novel problems for the scientific community and have been extensively treated in the past few years. On the other hand, control of vehicle dynamics may seem a less innovative challenge, since electronic devices like ESP or ABS are already ubiquitous in cars.

Within this framework, robust control, namely the science of designing feedback controllers while also taking a measure of the uncertainty into account, has played a central role. However, a deeper look at the problem makes it evident that the main vehicle dynamics issues for autonomous cars are more complex than those of human-driven cars, and the standard approaches may no longer be effective.

Actually, path planning and tracking is a widely studied topic in robotics, aerospace and other mechatronics applications, but it is certainly novel for road vehicles. In existing cars, even the most cutting-edge devices are dedicated to adjusting vehicle speed or acceleration in order to increase safety and performance, whereas the trajectory tracking task is left entirely to the driver (except for a few examples, such as automatic parking systems).

Nonetheless, most vehicle dynamics problems arise from the fact that the highly nonlinear road-tire characteristics are unknown and unmeasurable with existing (cost-affordable) sensors. Therefore, keeping the driver inside an outer (path-tracking) control loop represents a great advantage, in that he or she can manually handle the vehicle in critical conditions (at least to a certain extent) and make the overall system robust to road and tire changes. This is obviously not the case for autonomous vehicles.

Hence, it seems that standard robust control for braking, traction or steering dynamics could turn out to be “not robust enough” for path tracking in autonomous vehicles, because one can no longer rely on the intrinsic level of robustness provided by the driver's feedback loop. In city and highway driving this may not be a problem, because the sideslip angles characterizing the majority of manoeuvres are low and easily controllable [8]. However, in the remaining cases (e.g., during sudden manoeuvres for obstacle avoidance), a good robust controller for path tracking, exploiting the most recent developments in the field, could prove decisive in saving human lives in road accidents.
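For scale, consider the kind of baseline such a robust design must improve upon: a simple geometric tracker (pure pursuit) on a kinematic bicycle model, which works well at low sideslip but carries no robustness to tire saturation. All numbers below are illustrative assumptions:

```python
import math

def pure_pursuit_lane_change(wheelbase=2.7, v=10.0, lookahead=8.0,
                             dt=0.01, t_end=10.0, target_y=3.0):
    """Kinematic bicycle model steered by pure pursuit towards a straight
    target line at y = target_y (a lane-change manoeuvre). No tire model:
    validity rests on the low-sideslip assumption discussed in the text."""
    x, y, yaw = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        gx, gy = x + lookahead, target_y        # look-ahead point on the line
        alpha = math.atan2(gy - y, gx - x) - yaw  # angle to the goal point
        delta = math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)
        x += v * math.cos(yaw) * dt
        y += v * math.sin(yaw) * dt
        yaw += v / wheelbase * math.tan(delta) * dt
    return y, yaw

final_y, final_yaw = pure_pursuit_lane_change()  # settles on the target lane
```

Precisely because the tire is absent from this model, an emergency swerve that saturates the tires invalidates the controller's assumptions, which is where the robust-control questions above begin.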

It can be concluded that a few important questions still need an answer from the robust control community, e.g.:

• “can we provide a sufficient level of robustness with respect to all road and tire conditions, without sacrificing too much performance?”
• “are we able to replicate the response of an expert driver to a sudden change of external conditions?”
• “how can we best exploit the information coming from additional sensors (e.g., cameras, sonars, …) not traditionally available on-board?”

but also many others.

IEEE experts estimate that up to 75% of all vehicles will be autonomous by 2040. This scenario will be accompanied by significant cost savings associated with human lives, time and energy. As control scientists and engineers, it really seems we can play a leading role towards this important social and economic leap.

Article provided by
Simone Formentin, PhD, Assistant Professor
IFAC Technical Committee 2.5: Robust Control


The Robot based Autonomous Refuse handling (ROAR) project is a first attempt to demonstrate such a combination of autonomous devices in practice. An operator-driven refuse collection truck is equipped with autonomous support devices to fetch, empty, and put back refuse bins in a predefined area.

The physical demonstrator in the ROAR project comprises one truck and four support devices. When the truck has stopped in an area, a camera-equipped quadcopter is launched from the truck roof to search for bins and store their positions in the system. As bin positions become available in the system, an autonomously moving robot is sent out from the truck to fetch the first bin. The system's path planner calculates the path to the bin as an array of waypoints, based on a pre-existing map of the area. While following the waypoints, the robot is intelligent enough to avoid obstacles that are not on the map. To accomplish this detection, the robot is equipped with a LiDAR and ultrasonic sensors.
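The planner itself is not detailed here; as a generic illustration of waypoint planning on a pre-existing map, a breadth-first search over a grid map yields a shortest array of waypoints (the actual ROAR planner may well use a different algorithm):

```python
from collections import deque

def plan_path(grid, start, goal):
    """grid: list of equal-length strings, '#' = obstacle, '.' = free.
    Return a shortest list of (row, col) waypoints from start to goal
    using breadth-first search, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:        # reconstruct waypoints back to start
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

The obstacles baked into the map are handled here at planning time; obstacles *not* on the map are, as described above, handled reactively by the robot's LiDAR and ultrasonic sensors while it follows the waypoints.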

After reaching the last waypoint, the robot changes from navigation to pick-up mode. By exploiting the LiDAR and a front facing camera, the exact position and orientation of the bin can be detected. The robot aligns itself so that the bin can be picked up.
After the pick-up, the planner provides the robot with a new path back to the truck. After the last waypoint, the robot aligns with the lift at the rear of the truck. The lift is set at a pre-defined angle so that the robot can move up to it and hook the bin onto it. While the bin is being emptied, the lift system monitors the area around the lift with a camera to ensure that no person is in the way of the lift; if someone is, the lift movement is paused until the area is clear.

An emptied bin is picked up by the robot and returned to its initial position, once again based on a path from the planner. When reaching the initial bin position, the bin is put down. The robot can thereafter move to the next bin to be emptied, and the emptying procedure is repeated.

When there are no more bins to empty, the robot moves back to the truck and aligns itself with the lift. Like a bin, the robot is hooked onto the lift, and the overall procedure is completed. The truck can then be started and driven to the next area.

The coordination of the truck and the support devices is based on a discrete event system model. This model abstracts the overall emptying procedure into a finite number of states and transitions. The states capture distinguishable aspects of the system, such as the positions of the devices and the empty/full status of the bins. The transitions model the start and completion of the various operations that the devices can perform. All steps in the above description of the emptying procedure can be modelled by such operations.
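A fragment of such a model can be sketched as a transition table. The state and event names below are hypothetical abstractions of the emptying cycle, not ROAR's actual model:

```python
# Hypothetical states/events abstracting one bin-emptying cycle.
TRANSITIONS = {
    ("idle", "bin_located"): "moving_to_bin",
    ("moving_to_bin", "at_bin"): "picking_up",
    ("picking_up", "bin_lifted"): "returning_to_truck",
    ("returning_to_truck", "at_truck"): "emptying",
    ("emptying", "bin_emptied"): "idle",
}

def fire(state, events):
    """Fire events in order; reject any event that is not enabled in
    the current state, as a discrete event supervisor would."""
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"{event!r} not enabled in state {state!r}")
        state = TRANSITIONS[key]
    return state
```

A supervisor synthesised over such a table can, for instance, refuse to enable the lift while the robot is still in `moving_to_bin`, which is exactly the kind of property the verification step checks.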

The investment in the discrete event model brings a number of attractive properties. During the development phase, the model can be derived using formal methods. Verification, as well as synthesis (iterative verification), is then employed to refine an initial model until it satisfies the specifications on the system.

Moreover, the development of the actual execution of an operation can be separated from the coordination of the operation. As an example, consider the operation modeling that the robot navigates along a path. From an execution point of view, the operation must assure that given a path the robot eventually ends up at the last waypoint without colliding with any obstacle. From a coordination point of view, the operation must only be enabled when there is a path present in the system and the robot is positioned close to the initial waypoint.

The model contains two types of operations: operations that model the nominal behavior, and operations that model foreseen non-nominal behavior. The recovery operations in the second group can, for example, describe what the system can do when the robot cannot find a bin at the end of a path, or how to re-hook an incorrectly placed bin on the truck lift.

The discrete event model can also be exploited to handle more severe recovery situations, after unforeseen errors. As part of the development, the restart states of the system are calculated from the model. Upon recovery, to simplify the resynchronization between the control system and the physical system, the operator sets the active state of the control system to such a restart state and modifies the physical system accordingly. Recovering from a restart state guarantees that the system can eventually finish an ongoing emptying procedure.

The truck and the support devices are connected using the Robot Operating System (ROS). ROS is an operating system-like robotics middleware that among other things enables message-passing between components defined in the system. Two types of messages are used in the ROAR project. The first type is messages related to starting and completion of operations. An operation start message is triggered from a user interface and is translated into a method call in the support device executing that operation. Under nominal conditions, this support device will eventually respond with a message saying that the operation has been completed. Both messages will update the current active state of the control system.
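This message flow can be mimicked with a toy publish/subscribe dispatcher. The code below is plain Python standing in for ROS topics, not actual rospy code, and the topic and state names are invented for the sketch:

```python
class MessageBus:
    """Minimal topic-based dispatcher, ROS-like in spirit only."""
    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for callback in self._subs.get(topic, []):
            callback(msg)

# Wiring in the style described above: a start message triggers the
# device, whose completion message then updates the active state.
bus = MessageBus()
active_state = {"op": None, "done": False}

def on_start(op):
    active_state["op"] = op      # control system records the active operation
    bus.publish("op_done", op)   # device (instantly, here) reports completion

bus.subscribe("op_start", on_start)
bus.subscribe("op_done", lambda op: active_state.update(done=True))
bus.publish("op_start", "navigate_to_bin")
```

In the real system the completion message would of course arrive asynchronously, after the device has executed the operation; collapsing it into the same call chain just keeps the sketch self-contained.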

The second type is messages related to transferring data. Data transfer can be both internally within the programs connected to a support device and externally between support devices. An example of external data transfer is a path that is created in the path planner and then transferred to the robot.

During execution, the discrete event model is hosted on a web server, and interaction with the model is facilitated by the server’s API. Operator interaction is accomplished through a web-based user interface, so an operator can access the model using any device connected to the system’s network, for example a computer in the truck cabin or a touchpad strapped to the operator’s forearm.

At the other end, ROS is also connected to the API. As pointed out before, this connection allows operations started by the operator through the user interface to be translated into method calls in the appropriate support device. Completion of the operation execution is translated into a POST request to the API, which updates the discrete event model to capture that the operation has been completed.
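The round trip of the two message directions can be sketched in plain Python (a stand-in for the actual ROS messaging, with illustrative names only): a start message becomes a method call in the device, and the device's completion message updates the model's active state.

```python
# Minimal stand-in for the two ROS message types described above:
# operation-start messages (UI -> device) and completion messages
# (device -> model). Names are illustrative, not the project's API.
class ControlModel:
    def __init__(self):
        self.active = set()      # currently executing operations
        self.completed = []      # finished operations, in order

    def on_start(self, op, device):
        """Start message: mark active and call into the device."""
        self.active.add(op)
        device.call(op)

    def on_completed(self, op):
        """Completion message: update the active state of the model."""
        self.active.discard(op)
        self.completed.append(op)

class Device:
    def __init__(self, model):
        self.model = model
    def call(self, op):
        # Nominal case: execute the operation, then report completion.
        self.model.on_completed(op)

model = ControlModel()
robot = Device(model)
model.on_start("navigate", robot)
```

In the real system the two calls would be asynchronous messages over the network; the sketch only shows how both message types converge on the model's active state.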
The physical demonstrator in the ROAR project is limited to a single bin-handling robot. A next step could be to include more bin-handling robots. For the specific field of refuse handling, more robots could make the emptying procedure more efficient. Multiple robots might also allow the noisy truck to be parked farther away from the bins, causing less disturbance where people live. Today this is avoided because a distant truck would force the operator to walk too far.

From a more general point of view, coordination of multiple autonomous devices is an open research question. The two extremes are that the coordination is either performed from one central unit to which all devices are connected, or that the devices are intelligent enough to solve the coordination among themselves in a distributed manner. The two major coordination challenges are the distribution of tasks between the devices and the distribution of the space in which the devices can operate. The overall goal is thus to accomplish all tasks in some optimal way while ensuring that no device is physically blocked in the operating area.

The productization of this overall control and coordination between one truck and several autonomous support devices is an interesting challenge. Imagine a future scenario where a haulage contractor orders a new system. The truck is perhaps ordered from company A, with heavy-duty equipment from company B. The equipment is complemented with support devices from companies C and D. To operate properly, the system should also use cloud services, provided by companies E and F. To further add to the equation, operators are likely also in the loop to cope with unforeseen situations, complex item handling and parts of the decision making.

All in all, this text has only cracked open the door to what will come after the autonomous driving of passenger cars that we see today. There are still many mountains to climb and standards to agree upon before areas other than “just” the driving become automated. The outcome of the ROAR project is thus only a small step on a long journey ahead.

The ROAR project is initiated and led by Volvo Group. Chalmers University of Technology, Mälardalen University and Pennsylvania State University take part in the project as Preferred Academic Partners to Volvo Group. The intention from Volvo Group is that students, through bachelor and master theses, should perform most of the development.

Article provided by
Patrik Bergagård, PhD, ROAR Project Leader
Martin Fabian, Professor, Automation
IFAC Technical Committee 1.3: Discrete Event and Hybrid Systems


This contribution deals with current categories in the automation of industrial production and in mobile communication: their content, their interrelations, and their significance for industrial wireless communication.

Present categories in the area of wireless automation that deserve reflection are “Industrial Internet / Industry 4.0” and “5th Generation Mobile Networks (5G)”.

When formulating a new future-oriented goal, a category that summarizes the goal and makes it identifiable and addressable for activities is required. Such categories are regularly used in a contemporary context to demonstrate that activities, products and services correspond to this future-oriented goal, independently of how far the goal has actually been achieved. For truly future-oriented goals, immediate attainment is hardly realistic: high expectations are raised that cannot be fully met in the short term. Inflationary reference to the category then obscures the aim and leads to rejection of the category itself. The duty of the expert committees is to raise awareness of the future-oriented goals, to give orientation, and to evaluate the steps of technical development.

Industry 4.0
The project Industry 4.0 originated as a political initiative of the German Federal Government and is part of the “New High-Tech Strategy – Innovations for Germany”. With this project, industry is to be supported in actively contributing to a transformation of industrial production. The national initiative does not, however, mean that the goal is only national: the same goals are pursued in international competition under different categories, e.g. Advanced Manufacturing, or as part of the Internet of Things (IoT).

The focus is on a new organization and control of the value chain over the life cycle of products. This cycle shall be oriented to individual customer wishes. It includes continuous information management from the product idea, through development, production and distribution to the final customer, up to recycling, including the related services. “The basis for this is the availability of all relevant information in real time through the connection of all instances involved in adding value, as well as the possibility to derive the optimal value-adding chain from the data at any time.” The optimal processing of information needs as good as possible a digital reflection of the value-adding chain, its so-called virtualization.

Undoubtedly, communication plays a central role in the availability of all relevant information in real time. Furthermore, it is indisputable that the mobility of the objects involved in production, as well as the necessary flexibility of production, require wireless communication systems. However, the subject of a current discussion is whether the available wireless communication technologies fulfil the requirements of an “Industrial Internet”, and which characteristics should be aimed for. In Germany, the special working group AK-STD-1941.0.2 “Radio standardization and Industry 4.0” of the German Commission for Electrical, Electronic & Information Technologies of DIN and VDE (DKE) is working on this question. Experts from mechanical engineering, the electrical industry and the digital economy exchange information with a focus on:

• relevant use cases for wireless communication in industry production,
• reference models for wireless communication,
• activities in standardization and specification and
• research activities.

The goal is to develop contributions to the Standardization Roadmap Industry 4.0. Two essential aspects have become apparent:

1. Even though there are already many useful industrial wireless communication applications, additional, more efficient wireless communication technologies will be necessary for the aspired transformation.

2. The increasing number of radio connections required for the Industrial Internet demands new concepts and solutions for an application-oriented and efficient usage of the wireless medium.

These aspects address technical and political issues. This is highlighted by the recent project “ICT 2020 – Reliable wireless communication in industry”, funded by the German Federal Ministry of Research. A total of eight research projects address the aspects mentioned above from different perspectives. An accompanying coordinating research project deals with overarching scientific questions of reliable wireless communication, as well as with the coordination among the projects.

5G
The category “5G” stands for “5th Generation Mobile Networks” or “5th Generation Wireless Systems”. The first international research projects regarding basic technologies and concepts have been completed. In parallel with a second research initiative, the standardization process shall be started at the end of 2015. The goals for this new generation follow the tradition of telecommunication development: significant improvements of performance parameters, such as:

• 100 times higher data rate than present LTE networks (up to 10,000 Mbit/s),
• about 1000 times higher capacity,
• the ability to address 100 billion mobile devices worldwide simultaneously,
• extremely low latency, with ping times below 1 millisecond,
• 1/1000 energy consumption per transferred bit,
• 90 % lower power consumption per mobile service.

This development alone is not sufficient reason to pursue 5G in relation to the Industrial Internet, especially since there are reservations in the automation industry, which should not be neglected, against a scientific and technical dependency on mobile network providers. However, it is remarkable that 5G targets numerous application areas, so-called verticals, that exceed classical telecommunication. One of these application areas is called Massive Machine Type Communications (MMTC) and addresses industrial wireless communication. But there are still many unanswered questions. For instance, do the development goals of these new technologies consider the requirements of the Industrial Internet? Currently, the telecommunication community specifies the use cases as well as the requirement profiles. Who from the areas of machinery and plant engineering, as well as electrical engineering, accompanies the development in this application area? Will there be a complete integration of the wireless communication technologies into the concepts of the Industrial Internet? This concerns both the consistent communication concept from the sensor up to the command-and-control level, and its representation as a digital model for virtual production. After all, communication is not only a means to an end but also an object of an industrial plant: planning, implementation and operation are not independent of the production plant. Who is dedicated to the device and system description? Who is responsible for engineering and for guaranteeing the availability of communication according to the production target? The application plays a different role than the traditional user of telecommunication. This raises the question of how fulfilment of the user requirements can be guaranteed. This is the responsibility of the device and system manufacturers as well as of the operators of the communication systems. But the requirements and conditions of the production plant are also important.
These interdependences and interrelations require cross-sectoral standardization. The standard IEC 62657 (IEC 62657-1, Industrial communication networks – Wireless communication networks), which describes the requirements and conditions that industrial automation places on wireless communication, may serve as an initial basis. Furthermore, it is important to define the interfaces to radio standardization.

One important question remains to be clarified: is the client willing to pay for the availability of the information exchange?

Focusing on the content of the categories Industrial Internet and 5G, it can be concluded that the new mobile communication has great potential for an all-embracing provision of production-relevant information, as is planned for the transformation of industrial production. It is, though, of essential importance to overcome the barriers between telecommunication and industrial automation. This concerns the industry boundaries as well as the interfaces of the technical implementations and their standardization. First of all, a common language, in the literal sense, has to be found. Based on this, the concepts that enable the integration of telecommunication into industrial automation have to be matched. Current research projects and new calls for proposals offer the possibility for this. Moreover, the communication in expert committees and standardization bodies shall be used to make new telecommunication concepts usable for the transformation of industrial production. The success for the economy depends on its engagement in working globally on the open questions, as well as on its usage of the created political framework conditions.

Then, wireless communication will have the potential to influence automation concepts. Categories shall be used for technical orientation and less as a marketing instrument.

Article provided by
Ulrich Jumar
Lutz Rauchhaupt, Institute of Automation and Communication e.V. at the Otto-von-Guericke-University Magdeburg, Germany
IFAC Technical Committee 3.3 Telematics: Control via Communication Networks


The motivation for this issue comes from a real need for an open discussion about the challenges of control for very demanding systems, such as wind turbine installations, requiring so-called “sustainability” features. Sustainability here denotes the ability to tolerate possible malfunctions affecting the system while continuing to work and maintaining power conversion efficiency. Sustainable control has begun to stimulate research and development in a wide range of industrial communities, particularly for systems demanding a high degree of reliability and availability. Such a system should be able to maintain specified operable and committable conditions, and at the same time should avoid expensive maintenance work. For offshore wind farms, a clear conflict exists between ensuring a high degree of availability and reducing costly maintenance.

Renewable energy can be produced from a wide variety of sources including wind, solar, hydro, tidal, geothermal, and biomass. By using renewables in a more efficient way to meet its energy needs, the EU lowers its dependence on imported fossil fuels and makes its energy production more sustainable and effective. The renewable energy industry also drives technological innovation and employment across Europe, as highlighted for the wind power conversion systems.

The 2020 renewable energy targets are set. The EU’s Renewable Energy Directive sets a binding target of 20% of final energy consumption from renewable sources by 2020. To achieve this, EU countries have committed to reaching their own national renewables targets, ranging from 10% in Malta to 49% in Sweden. They are also each required to have at least 10% of their transport fuels come from renewable sources by 2020 [1]. All EU countries have adopted national renewable energy action plans showing what actions they intend to take to meet their renewables targets. These plans include sectorial targets for electricity, heating and cooling, and transport; planned policy measures; the different mix of renewables technologies they expect to employ; and the planned use of cooperation mechanisms.

A new target for 2030 has been fixed. Renewables will continue to play a key role in helping the EU meet its energy needs beyond 2020. EU countries have already agreed on a new renewable energy target of at least 27% of final energy consumption in the EU as a whole by 2030. This target is part of the EU’s energy and climate goals.

Support schemes for renewables are available, which drive the technological innovation and employment in this framework. Horizon 2020 is the biggest EU Research and Innovation programme ever with nearly €80 billion of funding available over 7 years (2014 to 2020) – in addition to the private investment that this money will attract. It promises more breakthroughs, discoveries and world-firsts by taking great ideas from the lab to the market. Horizon 2020 is the financial instrument implementing the Innovation Union, a Europe 2020 flagship initiative aimed at securing Europe’s global competitiveness [1].

By coupling research and innovation, Horizon 2020 is helping to achieve this with its emphasis on excellent science, industrial leadership and tackling societal challenges. The goal is to ensure Europe produces world-class science, removes barriers to innovation and makes it easier for the public and private sectors to work together in delivering innovation.

Wind energy is perhaps the most advanced of the ‘new’ renewable energy technologies, but there is still much work to be done. Assessments of the research and technology developments and impacts have been highlighted by recent proposals within the Horizon 2020, with key benefits from both the scientific and industrial points of view.

Wind energy can be considered a fast-developing multidisciplinary field consisting of several branches of engineering sciences. The National Renewable Energy Laboratory estimated a growth rate of the installed wind energy capacity of about 30% from 2001 to 2006, and an even faster rate up to 2014.

Global wind power installations reached 369.6 GW in 2014, with an expected growth to 415.7 GW by the end of 2015. After 2009, more than 50% of new wind power capacity was installed outside of the original markets of Europe and the U.S., mainly driven by the market growth in China, which now has 101,424 MW of wind power installed. Several other countries have reached quite high shares of wind power in their electricity production, with rates from 9% to 39% in 2015, for example Denmark, Portugal, Spain, France, Ireland, Germany and Sweden. Since 2009, 83 countries around the world have been exploiting wind energy on a commercial basis, as wind power is considered a renewable, sustainable and green solution for energy harvesting. Note, however, that even though the U.S. currently obtains less than 2% of its required electrical energy from wind, the most recent National Renewable Energy Laboratory report states that the U.S. will increase this share up to 30% by the year 2030. Note also that, despite the fast growth of installed wind turbine capacity in recent years, multidisciplinary engineering and science challenges still exist. Moreover, wind turbine installations must guarantee both power capture and economic advantages, which has motivated their dramatic growth [1].

Industrial wind turbines have large rotors and flexible load-carrying structures that operate in uncertain, noisy and harsh environments, thus representing challenging cases for advanced control solutions [2]. Advanced controllers can achieve the required goal of decreasing the cost of wind energy by increasing the capture efficiency; at the same time they should reduce the structural loads, thus increasing the lifetimes of the components and turbine structures.

Although wind turbines can be developed in both vertical-axis and horizontal-axis configurations, the industrial and technological interest focuses on horizontal-axis wind turbines, which represent the most common solution in today’s large-scale installations. In horizontal-axis wind turbines the rotor is placed atop a tall tower, with the advantage of larger wind speeds than near the ground. Moreover, they can include pitchable blades (i.e. blades that can be oriented with respect to the wind direction) in order to improve the power capture, the structural performance, and the overall system stability. On the other hand, vertical-axis wind turbines are more common for smaller installations. Note that proper wind turbine models are usually oriented to the design of suitable control strategies, which are more effective for large-rotor wind turbines. Therefore, the most recent research considers wind turbines with capacities of more than 10 MW [3].

Another important issue derives from the increasing complexity of wind turbines, which gives rise to stricter requirements in terms of safety, reliability and availability. In fact, due to the increased system complexity and redundancy, large wind turbines are prone to unexpected malfunctions or alterations of the nominal working conditions. Many of these anomalies, even if not critical, often lead to turbine shutdowns, again for safety reasons. Especially for offshore wind turbines, this may result in substantially reduced availability, because rough weather conditions may prevent the prompt replacement of damaged system parts. The need for reliability and availability that guarantees continuous energy production thus requires sustainable control solutions [2].

These schemes should be able to keep the turbine in operation in the presence of anomalous situations, perhaps with reduced performance, while managing the maintenance operations. Apart from increasing availability and reducing turbine downtimes, sustainable control schemes might also obviate the need for more hardware redundancy, if virtual sensors can replace redundant hardware sensors. The schemes currently employed in wind turbines typically operate at the level of supervisory control, where commonly used strategies include sensor comparison, model comparison and thresholding tests. These strategies enable safe turbine operation, which involves shutdowns in critical situations, but they are not able to actively counteract anomalous working conditions. Therefore, recent research has been oriented towards sustainable control strategies that obtain a system behaviour close to the nominal one in the presence of unpermitted deviations of any characteristic property or system parameter from standard conditions (i.e. a fault). Moreover, these schemes should provide the reconstruction of the equivalent unknown input that represents the effect of a fault, thus achieving the so-called Fault Detection and Diagnosis tasks [3].
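As a minimal sketch of one of the commonly used supervisory strategies, consider a thresholding test on the residual between a model prediction and a sensor measurement. All signal values and the threshold below are illustrative, not taken from any real turbine:

```python
# Hedged sketch of a supervisory-level threshold test: compare a
# measured signal with a model prediction and flag a fault whenever
# the residual exceeds a fixed threshold.
def detect_fault(measured, predicted, threshold):
    """Return the residual sequence and a per-sample fault flag."""
    residual = [m - p for m, p in zip(measured, predicted)]
    flags = [abs(r) > threshold for r in residual]
    return residual, flags

# Toy data: a sensor bias appears at sample 2 and clears at sample 4.
predicted = [1.00, 1.00, 1.00, 1.00, 1.00]
measured  = [1.02, 0.98, 1.60, 1.55, 1.01]
residual, flags = detect_fault(measured, predicted, threshold=0.2)
```

A fault-tolerant scheme would go one step further than flagging: the residual itself serves as an estimate of the equivalent unknown input, which the controller can then compensate for instead of shutting the turbine down.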

The need for advanced control solutions for these safety-critical and very demanding systems is also motivated by the requirements of reliability, availability, and maintainability alongside the required power conversion efficiency. Therefore, these issues have begun to stimulate research and development of sustainable control (i.e. fault-tolerant control), in particular for wind turbine applications. Particularly for offshore installations, Operation and Maintenance (O&M) services have to be minimised, since they represent one of the main factors of the energy cost. The capital cost, together with the wind turbine foundation and installation, determines the basic term in the cost of the produced energy, constituting the energy “fixed cost”. O&M represents a “variable cost” that can increase the energy cost by up to about 30%. At the same time, industrial systems have become more complex and expensive, with less tolerance for performance degradation, productivity decrease and safety hazards. This also leads to ever increasing requirements on the reliability and safety of control systems subjected to process abnormalities and component faults [2, 3].

As a result, the Fault Detection and Diagnosis tasks, as well as the achievement of fault tolerant features for minimising possible performance degradation and avoiding dangerous situations are extremely important. With the advent of computerised control, communication networks and information techniques, it becomes possible to develop novel real–time monitoring and fault–tolerant design techniques for industrial processes, but this also brings challenges. Several works have been proposed recently on wind turbine Fault Detection and Diagnosis, and the sustainable (fault tolerant) control problem has been recently considered with reference to offshore wind turbine benchmarks, which motivated this issue [3, 4].

In this way, by enabling this clean renewable energy source to reliably meet part of the world’s electricity needs, real progress can be made on the tremendous challenge of meeting the world’s future energy requirements. The wind resource available worldwide is large, and much of the world’s future electrical energy needs can be provided by wind energy alone if the technological barriers are overcome. The application of sustainable control to wind energy systems is still in its infancy, and there are many fundamental and applied issues that can be addressed by the systems and control community to significantly improve the efficiency, operation, and lifetimes of wind turbines.

[1] Global Wind Energy Council. Wind Energy Statistics 2014. Report, 2014.

[2] Blanke, M.; Kinnaert, M.; Lunze, J.; Staroswiecki, M. Diagnosis and Fault–Tolerant Control; Springer–Verlag: Berlin, Germany, 2006.

[3] Simani S. Overview of Modelling and Advanced Control Strategies for Wind Turbine Systems. Energies, October 2015, 10, 12116–12141. (This article belongs to the Special Issue Wind Turbine 2015)

[4] Odgaard, P.F.; Stoustrup, J. A Benchmark Evaluation of Fault Tolerant Wind Turbine Control Concepts. IEEE Transactions on Control Systems Technology 2015, 23, 1221–1228.

Article provided by
Dr. Silvio Simani
University of Ferrara
IFAC Technical Committee 6.4 – Safeprocess

Dear IFAC Social media followers, Dear Friends and Colleagues,

Let me take this opportunity to wish you all a Healthy and Prosperous New Year 2016.

IFAC launched its social media channels in September 2015 as a platform to leverage brand awareness amongst its internal and external stakeholders. It is great to count you among IFAC’s social media followers, and I wish to express my sincere thanks for your continued support and your important contribution to the activities of IFAC.

I would like to express my heartfelt thanks to Jakob Stoustrup for the enormous energy he has put into launching and managing the social media project. We are also thankful to our partner Sven Uhlig, from StudioUHU, for assisting us in the development and implementation of the IFAC social media platform.

We look forward to a successful growth of the IFAC social media platform and hope it will become a useful source of information exchange for the IFAC community. You are more than welcome to provide us with feedback and suggestions on how to improve IFAC activities and services.

With best wishes,

Janan Zaytoon
IFAC President

“Control is everywhere.” This sentence has been used to promote the importance of systems engineering in higher education, as well as being a propellant of market diversification for specialized equipment manufacturers. There is no doubt that this is true. Control is an essential piece of most aspects of our lives, but the original dream goes beyond that.

Visionaries, scientists and writers foresee a world strongly instrumented with sensors, actuators and computing resources, populated with software entities capable of anticipating the actions that lead to a better performance of real-world entities. This is a very complex scenario, typical of Artificial Intelligence research, involving low-level, entity-level, global and ethical requirements and objectives. It also involves a global tradeoff among systems with strongly non-linear behavior. Realizing this vision constitutes a major challenge.

As in the first large revolution of systems engineering, which moved it from the analog to the digital scenario, we now face a new revolution, also related to the coupling of control and computing. Some of the computing technologies involved in this qualitative change are:

• Pervasive communication and computing
• Cloud computing and services
• Big Data
• Deep learning
• Agents and assistants technology

Some years ago, the experience of moving from “Wireless Sensor Networks” to “Wireless Sensor and Actuator Networks” showed us that closing control loops in a complex and distributed computing environment unveils new challenges. While communication delays of sensor information can be measured, and sometimes compensated for in the control software, actuation commands traveling through a communication infrastructure are less robust. This communication must be done in the context of a Quality of Service contract. To ensure reliable performance, local actuators (“local” meaning “wired to the analog component”) must have computing capabilities to deal with this “contract”, and algorithms to manage contract violations and all other exceptional situations.
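A minimal sketch of such a contract check at the local actuator, assuming a hypothetical per-period arrival deadline and a safe fallback value (both invented here for illustration):

```python
# Sketch of a QoS "contract" on the actuation path: the local actuator
# applies a networked command only if it arrives within the contracted
# deadline; otherwise it falls back to a safe default value.
def actuate(commands, deadline, safe_value):
    """commands: list of (arrival_delay_s, command_value) per control
    period. Returns the value actually applied in each period."""
    applied = []
    for delay, value in commands:
        if delay <= deadline:        # contract met: apply the command
            applied.append(value)
        else:                        # contract violated: safe fallback
            applied.append(safe_value)
    return applied

# Toy run: the second command arrives too late and is replaced.
out = actuate([(0.01, 5.0), (0.20, 7.0), (0.02, 6.0)],
              deadline=0.05, safe_value=0.0)
```

A real deployment would replace the fixed fallback with a locally computed safe action, but the structure is the same: the exception handling lives next to the actuator, not across the network.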

In a similar way, looking at large-scale Pervasive Control reveals the importance of design aspects related to reliability and confidence, as well as the need to rethink the role of the human component. As a preliminary step toward the Pervasive Control dream, there must be solid developments in at least the following technologies:

• Mixed Criticality management and execution platforms.
• Cyber-security
• Performance related to resources availability. Graceful degradation.
• Stability guarantee.

The capabilities of computer and control technologies are now stronger than ever before. Nevertheless, it is not as simple as “putting it all together”; there is a lot of work to do to bridge the gap. Surely this is a source of opportunities for research and business.

Article provided by
José E. Simo Ten (LinkedIn profile)
Universitat Politècnica de València, Valencia, Spain
IFAC Technical Committee 3.1 Computers for Control

They are everywhere. Some 100 trillion inhabit the earth, comprising half of the animal mass on it. Have you guessed what I am talking about yet? See the following articles in the New York Times, NY Times Magazine, Scientific American, Nature, Science, or this TED talk to refresh your memory. The human microbiome has now been associated with almost every disease possible; microbes in the gut have even been associated with brain diseases [1]. The study of these little things is kind of a big deal.

What is a normal human microbiome?
The most important developments in the study of the human microbiome have come via the analysis of large cohorts across body sites (gut, mouth, vagina, skin, etc.) [2] and longitudinal studies where fecal samples have been collected on a daily scale [3,4]. What we know from these studies is that the abundance and kinds of microbes are body-site specific. See the figure just below, which illustrates this point [2, Figure 1c].

In the figure above, 4,788 specimens from 242 adults are projected onto the first two principal coordinates (computed from the relative abundance of microbes at the genus level). The different body sites are color coded, and it is clear that the specimens cluster according to body site and not by subject. We have also learned that microbial abundances are fairly stable for each site and for each subject (I will discuss this in more detail shortly). Before getting to the dynamics and estimation part, we need a story so as to understand the translational implications of a better understanding of the human microbiome.
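For readers curious how such projections are computed, here is a minimal sketch of classical principal-coordinates analysis (metric multidimensional scaling), assuming NumPy and a toy distance matrix in place of real abundance data:

```python
import numpy as np

def pcoa(D, k=2):
    """Classical principal-coordinates analysis: double-center the
    squared distance matrix (Gower centering) and scale the top-k
    eigenvectors by the square roots of their eigenvalues."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                # largest eigenvalues first
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy distances between four "samples": two tight clusters far apart,
# mimicking how body sites separate. Values are illustrative only.
D = np.array([[0.0, 0.1, 1.0, 1.0],
              [0.1, 0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 0.1],
              [1.0, 1.0, 0.1, 0.0]])
coords = pcoa(D)
```

On this toy matrix the first coordinate cleanly separates the two clusters, which is exactly the kind of structure the body-site figure shows on real data.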

Fecal Microbial Transplantation
This story begins with Jane coming to the hospital because of an infection in her leg. To kill the infection she is given broad-spectrum antibiotics. After a few days the infection is gone, but Jane now has severe diarrhea. The antibiotics have killed some of the healthy bacteria in her gut, and Jane now has an overabundance of Clostridium difficile, i.e. she has Clostridium Difficile Infection (CDI). Ironically, the most often prescribed treatment for CDI is another antibiotic. This targeted antibiotic always works in temporarily reducing the abundance of C. difficile, but the CDI recurs. So with no other options Jane asks her brother John for a fecal sample. This fecal sample is prepared and transplanted into Jane (Fecal Microbial Transplantation (FMT)). As if a miracle has occurred, Jane is healthy again. This kind of story is becoming commonplace in hospitals around the country.

What happens to the abundance of the microbes post-FMT is quite amazing. Consider the figure below [5, Figure 1]. It shows several subjects’ stool samples pre-FMT, circled in red, whose trajectories (post-FMT) rapidly converge to the green circle (which also contains the donor sample), overlaid on top of the samples from the 242 healthy adults above. A movie of these trajectories can be downloaded here. While the post-FMT samples do deviate slightly from the donor sample in terms of relative abundance of microbes, the stool samples remain within the range of what is considered a healthy stool sample. Stated simply, what we are observing is the patient’s gut microbiome being reconstituted and remaining in an abundance profile similar to that of the donor. It is quite amazing.

Is the microbiome stable?
What we just saw above was that the post-FMT stool samples remained similar to the donor's after transplantation. A natural question then arises: how stable is the human microbiome? Even biologists recognize that this is an important subject, as evidenced by the figure below, which appeared in a recent review article in Science [6, Figure 1].

While I am delighted to see that the notion of stability has been recognized as an important issue in the human microbiome, I have felt push back from the microbiome research community when exploring what this actually implies. There is also a misunderstanding of what the word stable means; this is simply a matter of education, and as control engineers/theorists we should educate those in the field. Consider the figure below [7, Figure 3A], which shows 15 days of samples (in yellow) taken from the year-long daily gut microbiome study in [3], projected onto the principal coordinates from a previous study [8,9]. Ignore the red, green, and blue dots and focus on the trajectories of the yellow dots, with gray lines tracing the day-to-day changes in the stool samples. The authors of [7] wanted to highlight that the samples can deviate from steady state in almost all directions, and unfortunately drew the conclusion that this is a visualization of instability in the gut microbiome. The original figure from [7] is shown on the left and my annotated figure on the right. Note how the two trajectories, after deviating, return to the “steady region”. This is not instability, but the very definition of stability. One could even argue we are observing asymptotic-like stability, i.e. in the absence of disturbances all trajectories converge to a single fixed point. Of course there will always be disturbances in biological systems. Could this line of reasoning help explain the success of FMT? I think you can begin to see where those working in the area of dynamics and control might be needed in this emerging field.
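The distinction is easy to demonstrate numerically. The sketch below uses a hypothetical stable two-state linear system (not a microbiome model): trajectories are kicked away from the fixed point by impulsive disturbances, yet always return to the steady region, exactly the behavior seen in the yellow trajectories.

```python
import numpy as np

# a stable 2-state linear system x_dot = A x, forward-Euler integrated,
# with impulsive disturbances: excursions away from the fixed point decay
A = np.array([[-0.5,  0.2],
              [-0.2, -0.5]])
dt = 0.01
x = np.zeros(2)
dist_times = {1000, 3000}            # disturbances arrive at these steps
excursion = []
for t in range(5000):
    if t in dist_times:
        x = x + np.array([1.0, -0.5])   # impulsive kick
    x = x + dt * (A @ x)
    excursion.append(np.linalg.norm(x))
print(round(float(excursion[-1]), 3))   # ≈ 0.0: back in the "steady region"
```

Deviation followed by return is the signature of stability, not instability; only trajectories that diverge, or wander without returning, would indicate otherwise.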

How do we model the microbiome?
The most common way that microbes interact is through the consumption of nutrients and the synthesis of products (not necessarily through the direct consumption of each other) [10]. Therefore, a detailed model would contain states for both the abundance of microbes and the abundance of the metabolites they consume and synthesize. At the finest level of modelling, all host and microbe metabolic pathways would need to be mapped. We currently do not possess the technology or sufficiently rich data to perform this rigorously. At this point in our understanding of microbial dynamics, it is more common to think of a reduced-order model that only accounts for the abundances of the microbes.

The two most popular (reduced-order) models are generalized Lotka-Volterra (GLV) dynamics over a network and Bayesian networks. The first is deterministic (and the most common); the second, probabilistic. I will focus on the first here, but a similar discussion could follow with a probabilistic mindset, just with a lot more capital E's.

Let $$x_i$$ be the abundance of microbe $$i$$ for subject $$X$$ at a specific location on/in the body. Let’s assume for now we are concerned only with the gut. Then the GLV model for $$n$$ microbes interacting in the gut of subject $$X$$ is described by the following differential equation

$\dot{x}_i=r_ix_i+x_i\sum_{j=1}^na_{ij}x_j$ where $$i=1,2,\ldots,n$$. Collecting the abundances of the microbes into a column vector $$x=[x_1,\ x_2,\ \ldots ,\ x_n]^T$$ the dynamics can be compactly written as $$\dot{x}=\text{diag}(r)x+\text{diag}(x)Ax,$$
where $$r$$ is a column vector of the $$r_i$$ and $$[A]_{ij}:=a_{ij}$$. We will refer to $$A$$ as the microbial interaction matrix, or network. In this modeling paradigm $$r$$ captures the linear growth or death terms and the matrix $$A$$ captures the causal interactions amongst species. Thus the entry $$a_{ij}$$ represents the average effect that species $$j$$ has on species $$i$$, determined by what species $$j$$ generates as products and what both species $$i$$ and $$j$$ consume as nutrients. For instance, if species $$j$$ produces products that species $$i$$ consumes as nutrients, and the two do not compete for any other nutrients, then $$a_{ij}$$ would be positive. I mentioned earlier that we do not understand the microbial and metabolic interactions well enough to have a global bottom-up model. Do we have sufficient data to learn the interactions in the simplified GLV model? We will discuss this in more detail shortly. Note that in this blog the term “species” is used in the general ecological sense, i.e. a set of organisms adapted to a particular set of resources in the environment, unless we state that we are specifically discussing the taxonomic rank “species”.
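As an illustration, the GLV dynamics can be simulated directly. The growth rates and interaction matrix below are hypothetical, chosen so that the negative diagonal (self-limitation) terms make the community stable: $$a_{21}>0$$ plays the role of species 1 feeding on a product of species 0.

```python
import numpy as np

def glv_step(x, r, A, dt=1e-3):
    """One forward-Euler step of the GLV dynamics
    x_dot = diag(r) x + diag(x) A x."""
    return x + dt * (r * x + x * (A @ x))

# hypothetical 3-species community (self-limitation on the diagonal)
r = np.array([1.0, 0.5, 0.8])
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.3, -1.0,  0.0],
              [ 0.0,  0.2, -1.0]])

x = np.array([0.1, 0.1, 0.1])       # small initial abundances
for _ in range(20000):              # integrate for 20 time units
    x = glv_step(x, r, A)
print(np.round(x, 3))               # converges to the equilibrium -A^{-1} r = [1, 0.8, 0.96]
```

Setting $$\dot{x}=0$$ with all abundances positive gives $$r+Ax^\ast=0$$, so the positive equilibrium is $$x^\ast=-A^{-1}r$$, which is what the simulation converges to.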

Let's now consider the gut of a different individual, subject $$Y$$, and assume that the dynamics are as follows

$\dot y = \text{diag}(\bar r) y + \text{diag}(y) \bar A y.$
Notice that I have written the dynamics for the two subjects with different variables, $$(r,A)$$ for subject $$X$$ and $$(\bar{r},\bar{A})$$ for subject $$Y$$. Is it possible that for two otherwise healthy individuals with similar diets $$A=\bar{A}$$ and $$r=\bar{r}$$? Recent attempts to infer the interaction matrices for two individuals illustrate some shortcomings in the literature and another opportunity for those working in system identification and machine learning to have an immediate impact in this field.

Consider the networks just below, illustrating a subset of the interaction matrix for two subjects' gut microbiomes [11, Figure 6]. This study concludes that the networks of causal interactions between microbes are not the same for healthy individuals. There are many issues with this study, however, illustrating the need for those working in system identification to collaborate with those working on the human microbiome. I do not want to disparage the authors of [11]; my only intention here is to point out mistakes in the analysis that a control engineer might have noticed.

The authors correctly recognized that the data were not sufficiently rich (1 year of daily samples with very little excitation) to accurately capture all species interactions (on the order of 100 at the taxonomic rank of species). Thus, the authors chose to perform system identification using only the 14 most abundant species, and then showed that the two networks are different. Discarding states, however, is problematic, and not an appropriate way to overcome a lack of sufficient richness. My own work is in fact pointing to the opposite scenario, that otherwise healthy individuals have the same underlying interaction network, but I will withhold that claim until I have more proof.

Our help is also needed in designing clinical trials so that samples can be obtained with as much information as possible. There is also the technical issue that microbial samples are usually normalized (we only know relative abundances with confidence). System identification in biological networks is often referred to as network reconstruction, and this entire subarea of biological research is in very serious need of our help, as a scathing comment in Nature Biotechnology points out.
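The normalization issue can be stated concretely: once abundances are divided by their total, the overall biomass is lost, and two very different absolute states become indistinguishable to any reconstruction method working from compositional data alone. A minimal sketch:

```python
import numpy as np

def to_relative(x):
    """Sequencing-style normalization: only the composition x / sum(x) is observed."""
    return x / x.sum()

# two very different ecological states...
x_low  = np.array([2.0, 3.0, 5.0])   # low total biomass
x_high = 100.0 * x_low               # the same community, 100x denser

# ...are indistinguishable after normalization
print(to_relative(x_low))    # [0.2 0.3 0.5]
print(to_relative(x_high))   # [0.2 0.3 0.5]
```

Since the GLV dynamics are not invariant to rescaling the state, fitting them to relative abundances as if they were absolute introduces errors that are easy to overlook.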

Lots of open questions

• Are some body sites more stable than others?
• How do we rigorously demonstrate this stability?
• Are the networks of two healthy individuals similar?
• How do different diseases affect that network?
• Why do FMTs work?
• Are there other modelling approaches that can be used to understand microbial dynamics?
• What are the fundamental limitations for network reconstruction when dealing with relative abundances?
• Finally, how do we control the microbiome?

Conclusions
Aircraft control has been a cornerstone application of control for more than 50 years. It is time, however, to find new areas for research. I hope this has inspired you to consider translational areas such as the human microbiome as a possible research area for the application of everything you have learned in dynamics, control, and system identification.

I would like to acknowledge my collaborators Yang-Yu Liu and Amir Bashan, as well as conversations I have had with Aimee Milliken, Eric Alm, Curtis Huttenhower, and Rob Knight.

Bibliography

1. Mayer, Emeran A., et al. “Gut microbes and the brain: paradigm shift in neuroscience.” The Journal of Neuroscience 34.46 (2014): 15490-15496.
2. Human Microbiome Project Consortium. “Structure, function and diversity of the healthy human microbiome.” Nature 486.7402 (2012): 207-214.
3. Caporaso, J. Gregory, et al. “Moving pictures of the human microbiome.” Genome Biology 12.5 (2011): R50.
4. David, Lawrence A., et al. “Host lifestyle affects human microbiota on daily timescales.” Genome Biology 15.7 (2014): R89.
5. Weingarden, Alexa, et al. “Dynamic changes in short- and long-term bacterial composition following fecal microbiota transplantation for recurrent Clostridium difficile infection.” Microbiome 3.1 (2015): 10.
6. Costello, Elizabeth K., et al. “The application of ecological theory toward an understanding of the human microbiome.” Science 336.6086 (2012): 1255-1262.
7. Knights, Dan, et al. “Rethinking ‘enterotypes’.” Cell Host & Microbe 16.4 (2014): 433-437.
8. Arumugam, Manimozhiyan, et al. “Enterotypes of the human gut microbiome.” Nature 473.7346 (2011): 174-180.
9. Arumugam, Manimozhiyan, et al. Addendum: “Enterotypes of the human gut microbiome.” Nature 506.7489 (2014): 516.
10. Levy, Roie, and Elhanan Borenstein. “Metabolic modeling of species interaction in the human microbiome elucidates community-level assembly rules.” Proceedings of the National Academy of Sciences 110.31 (2013): 12804-12809.
11. Fisher, Charles K., and Pankaj Mehta. “Identifying Keystone Species in the Human Gut Microbiome from Metagenomic Timeseries Using Sparse Linear Regression.” PLoS ONE 9.7 (2014).
Article provided by
Travis E. Gibson (@gibsonnews)
Harvard Medical School, USA
IFAC Technical Committee 1.2. Adaptive and Learning Systems