IFAC blog page

Fault Detection, Supervision and Safety for Energy Conversion Systems: Wind Turbines and Hydroelectric Plants

The motivation for this article comes from a real need for an open discussion about the challenges of fault detection and supervision for very demanding systems, such as energy conversion systems. These capabilities make it possible to identify malfunctions affecting the system (the so-called faults) and, at the same time, to continue operating while maintaining power conversion efficiency, provided proper countermeasures are adopted. Moreover, the safety issue has begun to stimulate research and development across a wide range of industrial communities, particularly for systems demanding a high degree of reliability and availability, such as wind turbines and hydroelectric plants. Once faults are promptly detected and compensated, the system can maintain specified operable and committable conditions while avoiding expensive maintenance work. For very large installations, a clear conflict exists between ensuring a high degree of availability and reducing costly maintenance, which justifies the solutions addressed in this contribution.

With the continuing decrease in the stock of global fossil fuels, concerns over security of supply, and pressure to honour greenhouse gas emission limits, much attention has turned to renewable energy sources to meet future increasing energy needs. Wind energy, now a mature technology, has proliferated considerably compared to other sources such as biomass, solar, and hydraulic energy systems. Hydraulic power offers previously untapped energy potential, and hydroelectric systems can exhibit some variability, especially when combined with wind energy.

One common misconception in the design of effective renewable energy conversion systems is that the converters must be optimally efficient. However, since the resource itself (wind or hydraulic power) is free, the main objective is to minimise the cost of the converted renewable energy, i.e. the cost per kWh, taking into account lifetime costs (capital, operational and commissioning/decommissioning costs) as well as energy receipts (the value of energy sold). Nevertheless, for a given capital cost, maximising the energy receipts (assuming relative insensitivity of operational costs) is an important economic objective, and control system technology has an important role to play in this regard. In an ideal world, one would design the complete system from the top down. In practice, however, discipline-specific experts usually design the physical systems, and control engineers, working in collaboration with them, address the control problem in a subsequent step. Such an approach, though prevalent in the bulk of industrial applications of control, is non-optimal, even if there are some notable exceptions.
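To make the objective concrete, here is a toy levelised-cost calculation (all figures below are hypothetical, chosen only for illustration) showing why cost per kWh over the installation's lifetime, rather than converter efficiency, is the quantity to minimise:

```python
# Illustrative lifetime cost-per-kWh computation. The capital cost,
# operating cost, decommissioning cost and annual energy yield are
# invented numbers, not data for any real installation.

capital, annual_opex, decommission = 2_000_000, 50_000, 100_000
years, annual_energy_kwh = 20, 6_000_000

lifetime_cost = capital + annual_opex * years + decommission
lifetime_energy_kwh = annual_energy_kwh * years

print(round(lifetime_cost / lifetime_energy_kwh, 4))  # → 0.0258 per kWh
```

A design change that raises capital cost but lowers operating cost (or raises yield) is attractive exactly when it reduces this ratio.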

Recent studies suggest a strong interaction between the fundamental design of renewable energy conversion machines and the systems used to achieve supervision and fault diagnosis tasks [1, 2]. Given the relatively low cost of supervision systems (implemented as computer algorithms and software) compared to the cost of the renewable energy converters themselves, the recent focus of research has been on increasing the energy conversion capacity of a given wind turbine or hydroelectric device. However, this relatively simple implementation modality masks both the capability of supervision systems and the high level of engineering underpinning the development of suitable fault diagnosis algorithms. For example, many high-performance model-based supervision and fault diagnosis methods require an accurate mathematical model of the system to be controlled, and a significant number of man-hours can be absorbed in modelling. Nevertheless, there is usually a good case for incorporating this technology to improve the performance (both technical and economic), reliability and safety of systems. By taking into account commonalities and contrasts, in particular between wind turbines and hydroelectric systems, computer science and engineering can play an important role in making energy conversion systems more competitive and effective [1, 2].

There are a number of economic issues associated with the introduction of supervision and fault diagnosis systems for improving the safety and reliability of renewable energy devices. One important factor is that many wind turbine and hydroelectric devices are situated in relatively remote and/or inaccessible areas, with consequent implications for maintenance. As a result, the implemented supervision and fault diagnosis systems should be reliable, and safety features are needed. In addition, any changes in the working conditions of energy conversion systems need to be considered, as these may impact operational cost via additional maintenance requirements.

Both wind turbines and hydroelectric systems exhibit nonlinear behaviour and are required to operate over a wide range of excitations [1, 2]. These energy conversion systems also have particular physical constraints (on displacements, velocities, accelerations and forces) that must be strictly observed if they are to operate effectively and have economically attractive operational lifetimes. Wind turbines and hydroelectric systems present both common and distinct requirements for the efficient conversion of renewable power into electric energy. In the context considered here, power conversion means converting renewable sources into electric energy while also regulating voltage and frequency. A power converter is therefore an electro-mechanical device for converting wind or hydraulic energy into electrical energy; it includes the electrical machinery used to convert and control both frequency and voltage.

With this view, commonalities and contrasts between wind and hydraulic energy systems are briefly outlined in the following. On one hand, even though hydraulic energy systems are well established and even more common than wind installations, the supervision and fault diagnosis problems for wind turbines have received much more attention in recent decades [1, 2]. In addition, the fault tolerant control problem for wind turbines has recently been analysed [1]. In general, these supervision and fault diagnosis methods are classified into two types: passive and active schemes. Passive solutions are designed to be robust against a class of presumed faults. In contrast, active approaches react to system component failures with appropriate reconfiguration actions, so that the stability and acceptable performance of the entire system can be maintained. The main difference between active and passive schemes is that an active design relies on a fault diagnosis system, which provides information about the faults. In the case considered here, the fault diagnosis system provides an estimate of the unknown input (the fault) affecting the system under control. Knowledge of the fault allows the active supervision system to reconfigure itself according to the current state of the system. The passive scheme, on the other hand, does not rely on a fault diagnosis algorithm, but is designed to be robust against any possible fault. This is accomplished by designing a supervision system that is optimised for the fault-free situation while satisfying graceful degradation requirements in the faulty cases. As a robust design, the passive strategy provides reliable controllers that guarantee the same performance with no risk of false fault detection [1].
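The active scheme described above can be sketched in a few lines. The scalar plant, the deadbeat control law and the model-inverting fault estimator below are all illustrative assumptions of my own, not the designs of [1]: the point is only to show the diagnosis-then-reconfiguration loop.

```python
# Minimal sketch of an active fault-tolerant scheme: a scalar plant with
# an additive actuator fault, a naive model-inverting fault estimator,
# and a control law reconfigured by subtracting the fault estimate.
# Plant parameters, gains and fault size are illustrative assumptions.

a, b = 0.9, 1.0                  # assumed plant: x+ = a*x + b*(u + f)

def fault_estimate(x_next, x, u):
    """Invert the known model to recover the additive fault."""
    return (x_next - a * x) / b - u

x, x_ref, f_hat, fault = 0.0, 1.0, 0.0, 0.0
for k in range(60):
    if k == 30:
        fault = 0.5                       # fault appears mid-run
    u = (x_ref - a * x) / b - f_hat       # reconfigured (active) control
    x_next = a * x + b * (u + fault)      # plant with the unknown fault
    f_hat = fault_estimate(x_next, x, u)  # diagnosis step
    x = x_next

print(round(abs(x - x_ref), 3))  # → 0.0: tracking recovered after the fault
```

A passive scheme, by contrast, would fix a single control law designed to tolerate the whole presumed fault class, with no `fault_estimate` step at all.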

On the other hand, few works have analysed the model-based design of supervision and fault diagnosis strategies applied to hydroelectric plants, as considered e.g. in [2]. Indeed, since a mathematical model is needed to describe the system behaviour, precise modelling of these processes can be difficult to achieve in practice. Several works discuss the modelling of hydroelectric processes together with their supervision and fault diagnosis design, as shown in [2]. These works consider elastic water effects, though the nonlinear dynamics are linearised at an operating point. Other papers have considered different mathematical descriptions together with techniques for controlling the power systems. Moreover, linear and nonlinear plants with various water column effects and supervision solutions have also been considered.

Therefore, the focus should be on identifying aspects that the two domains have in common, with a view to transferring ideas born in the wind turbine domain to hydroelectric plants. These issues have begun to stimulate research and development in the wider control community in each domain, and interesting results have been obtained. In particular, a proper mathematical description of these energy conversion systems should capture the complete behaviour of the process under monitoring, which has an important impact on the design of the supervision and fault detection systems themselves [1, 2].

Finally, it is worth noting that when the safety-critical level of the process under diagnosis is relatively high, the implementation of supervision and fault detection methodologies may be cheaper and more reliable than even the simplest schemes based on multiple redundant hardware sensors [1, 2].


[1] S. Simani and S. Farsoni, Fault Diagnosis and Sustainable Control of Wind Turbines: Robust data–driven and model–based strategies. Mechanical Engineering, Oxford (UK): Butterworth–Heinemann – Elsevier, 1st ed., Jan. 4th 2018. ISBN: 9780128129845.

[2] S. Simani, S. Alvisi, and M. Venturini, “Fault Tolerant Control of a Simulated Hydroelectric System,” Control Engineering Practice, vol. 51, pp. 13–25, June 2016. DOI: http://dx.doi.org/10.1016/j.conengprac.2016.03.010.

Article provided by:

Prof. Silvio Simani, Department of Engineering, University of Ferrara, Ferrara, Italy. Email: silvio.simani@unife.it
IFAC TC 6.4 SAFEPROCESS


IFAC Industry Committee and Panel Sessions in Toulouse

Contributed by Silvia Mastellone, University of Applied Sciences Northwestern Switzerland, and Tariq Samad, University of Minnesota

Control technology has historically been a strong enabler of major technological achievements, from space exploration to nanotechnology. Traditionally the subject was driven by applications in various domains: electrical, mechanical and chemical. The development of the theory has been strongly linked to challenging problems present in various applications. When we look at control technology today we observe a substantial gap between the advances in control theory, typically achieved in academia, and the technology adopted in industry. Such a gap prevents control from being used to its full potential, and various industrial sectors from benefiting from new developments in control. Advanced control has the potential to push the boundary of present technology in different industrial sectors. How can we transform this potential into the reality of the future?

A key enabler to this end is the fostering of collaboration between industry and academia; we can do more and we can do better if we join forces. As a result, in 2014 IFAC formed a Pilot Industry Committee with the following goals:
• Strengthen the engagement of industry and industry representatives in IFAC activities;
• Enhance the value of IFAC to industry;
• Help control research realize its full potential for industry impact.

Among other activities over the past triennium, the committee has focused on analyzing the current situation in terms of control methodologies presently applied in industry and understanding what aspects limit the collaboration between researchers and practitioners. Work streams have been formed to address the challenges of deeper collaboration. Based on the outcome of the Pilot Industry Committee, at the IFAC 2017 World Congress in Toulouse, France, the General Assembly approved a constitutional amendment that established a permanent Industry Committee chaired by a new Technical Board Vice-Chair for Industry Activities, who will also be a nonvoting, ex officio member of the IFAC Council. The amendment states, “The objectives of the Industry Committee will include increasing industry participation in and impact from IFAC activities.”

During the 20th IFAC World Congress the committee organized two panel sessions, the first on enhancing academic/industrial collaboration and the second on advanced control in industry more broadly. The discussions in these panel sessions are summarized below.

Readers of this post who are interested in actively participating in the work of the Industry Committee are encouraged to contact the author and/or the chair of the committee, Tariq Samad (tsamad@umn.edu).

Panel Session 1: How to Enhance Industry/University Collaboration on Advanced Control

Panelists: Dr. Kevin Brooks (BLUESP, South Africa), Dr. Alex van Delft (DSM, Europe), Prof. Sebastian Engell (TU Dortmund, Germany), Prof. Thomas Jones (S-PLANE Automation, South Africa), Dr. Michael Lees (Carlton & United Breweries, Australia), Prof. Silvia Mastellone (University of Applied Sciences Northwestern Switzerland, Switzerland), Dr. Takashi Yamaguchi (Ricoh, Japan)

The panelists, covering a broad range of industry sectors from mineral processing and brewing to aerospace, shared their experience in carrying out successful collaborations between academia and industry, with the ultimate goal of making advanced control solutions available for the advancement of a product or process. It was recognized that control is not currently used to its full potential, and specifically that not enough of the control crown jewels are realizing their full potential in industry.

The goal would be to move from the present finite-horizon game, in which each institution pursues intermediate goals (fast publication in academia, fast time-to-market in industry), to an infinite-horizon game in which long-term collaboration and sustainable high-performance results are achieved. A number of major factors that limit stronger collaboration have been identified, including gaps between the intermediate goals that define success for each institution, and differences in specialized knowledge and, consequently, language. The problem is highly complex and must be addressed in all its aspects; in particular, three key aspects have been identified: people, processes and tools. For each category a set of solutions has been proposed:

  1. People: Can we invest time and effort in training people to embrace both theoretical and application knowledge or to cooperate in order to realize a product or process with advanced technological features enabled by control?

On the topic of people, the need for mediators was identified as a key point; training mediators from academia or industry is an important step toward bridging the gaps. Education remains the fundamental goal of the university; nevertheless, a stronger bond between academia and industry can help students be better prepared for a future in industry and well equipped with a broader understanding of control's potential. It was also discussed how innovation springs from collaborative effort, merging the different know-how of the different communities. Besides mediators, other practical solutions were suggested, such as having more PhDs in industry, but also lecturers working on application challenges.

2.  Processes: Which additional processes can lead to stronger interaction?

Industry needs integrated solutions that are robust, reliable, and easy to understand and maintain. Accessibility of results was highlighted as a challenge: goals, outputs, timelines and projects need to be aligned. Consortia and "knowledge brokerage" events can provide frameworks for both parties to learn about each other's challenges and tools. We also see differences in how government funding models stimulate cooperation, and can adopt best practices from where this is done well. Finally, publishing problems and challenges, and focusing part of the research on implementation aspects, can lead to a significant step forward in aligning the intermediate goals.

3.  Tools: What are the necessary tools to enable stronger collaboration?

Effort should be placed in creating tools that facilitate the adoption of advanced solutions. This includes developing frameworks for testing advanced algorithms. It is important to go beyond simulation, ideally with a realistic prototype as the end goal, but also to create platforms that offer the possibility of testing algorithms on real problems. This would enable a smooth transition from academic research to usable applications.

Panel Session 2: Advanced Control in Industry: The Path Forward

Panelists: Dr. Kazuya Asano (JFE Steel, Japan), Dr. Philippe Goupil (Airbus, France), Dr. Benyamin Grosman (Medtronic, USA), Dr. Angeliki Pantazi (IBM, Switzerland), Dr. Jaroslav Pekar (Honeywell, Czech Republic), Dr. Tariq Samad (University of Minnesota, USA), Prof. Ricardo Sanchez Peña (Buenos Aires Inst. of Tech., Argentina).

The panelists presented examples of successful implementation of advanced control in various industry sectors and discussed key points to be considered in developing the next generation of advanced control and in enabling the use of existing control technologies in industrial applications.

The Pilot Industry Committee assessed the impact of several advanced control technologies; as expected, PID came out as a strong winner across industrial sectors, due to its versatility and simplicity of implementation. Model predictive control and system identification were also widely recognized for impact. However, the control crown jewels, such as nonlinear control, adaptive control, hybrid dynamical systems and robust control, were at the bottom of the list, not because they lack the potential to bring value, but because their accessibility is limited and their complexity can constitute a barrier to adoption in most industrial sectors.

With the general consensus that the true test of a technology is its real-world impact, it is left as a challenge to make sure that such advanced solutions can fully realize their potential.

An important point in testing and evaluating a technology is to quantify the benefit achieved by introducing it, what a company would refer to as Net Present Value (NPV). This benefit can be described in terms of higher production and performance, decreased cost and downtime, and other factors. Ultimately a better solution should bring quantifiable value; in other words, the goal is to create customer value by applying advanced control. Any new solution should demonstrate key indicators (e.g., recurring cost savings, weight savings, a reduced development cycle).
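The NPV computation mentioned above is short enough to sketch directly; the discount rate and all cash-flow figures below are invented purely for illustration:

```python
# Toy NPV of an advanced-control retrofit (all figures hypothetical):
# an upfront implementation cost followed by recurring yearly savings
# from higher throughput and reduced downtime.

def npv(rate, cashflows):
    """Discount a list of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Year 0: -200k implementation cost; years 1-5: +60k/yr savings.
flows = [-200_000] + [60_000] * 5

print(round(npv(0.08, flows)))  # → 39563: positive, so the project adds value
```

A negative NPV at the chosen discount rate would indicate that, on this measure alone, the retrofit does not pay for itself.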

The path forward has been identified in two key directions:

  1. First, in order to adopt existing control technology in the various industrial sectors, an effort has to be made by both parties to identify the potential impact of the control technology and render it user friendly. Industry could provide academia with representative benchmarks.

Simplicity and usability of a solution play a major role in determining whether it will be adopted in the long term. This does not necessarily mean that advanced methodologies cannot be adopted, but an effort has to be made toward rendering the solution user friendly and intuitive.

The problems and foreseen benefits are industry specific, so the challenge is how to combine a cross-domain technology such as control with domain-specific industries. Control should be able to expand and evolve flexibly according to the needs of the different sectors. This would require a holistic perspective integrating our original background as control engineers with deeper domain knowledge of processes and products. More on this topic is covered in the other panel session (see above).

2.   A second point discussed was how control should further evolve as a discipline to meet future technological challenges.

Initially, as most systems were analog, the field of control was developed to solve challenges for analog systems. In a second phase, as computers and digital systems were developed, control became digital. In the new era, cognitive computing systems will be able to learn and interact naturally with people, and control will need to evolve in order to serve this new generation of systems.

Additionally, the future will bring more digitalization and interconnectivity of systems, where the challenge is to orchestrate digitalized plants and systems. The new technological era opens a challenge to develop the new generation of control. It is up to the research community and practitioners to work together to make this step possible, and IFAC, through its newly established Industry Committee, must facilitate the interaction.


Jamming attacks: A major threat to controlling over wireless channels

The use of remote control and sensing over wireless communication has been increasing continuously. This trend will not slow down, given the expectations for the Internet of Things. However, the spread of wireless communication can create vulnerabilities in control systems, as transmissions can easily be disrupted by Denial-of-Service attacks through jamming. In this article, we provide a brief overview of this critical new issue and of the current efforts by researchers within IFAC.

Cyber security has become an important issue for society. Information and communication technologies are heavily incorporated in many fields, and yet they are exposed to cyber-attacks that threaten financial losses, environmental damage, and disruption of services used in daily life.

Recent research indicates that industrial control systems are no exception, being under threat from malicious attackers. Communication channels used for the transmission of measurement and control data are vulnerable to various types of attacks.

In this article, we focus on the so-called jamming attacks, which are Denial-of-Service attacks on wireless channels. Jamming attacks are perhaps the simplest types of attacks a control system may face, but they can be very dangerous. Generating a jamming attack does not require information about the internals of the control system. By simply emitting an interference signal, a jamming attacker can effectively block the communication on a wireless channel, disrupt the normal operation, cause performance issues, and even damage the control system.

Typically, jamming attacks are classified into two categories: active jamming and reactive jamming [1]. An active jammer's goal is to keep the channel busy regardless of whether the channel is being used or not. For example, the attacker can continuously emit strong radio signals to degrade the signal-to-interference-plus-noise ratio at the receiver side. A reactive jammer, on the other hand, observes the channel activity and starts jamming only when the channel is being used.
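The distinction can be illustrated with a toy slot-based simulation (my own construction, not taken from the cited reference): both jammers block every transmitted packet, but at very different energy costs.

```python
import random

# Toy comparison of the two jammer classes: an active jammer emits in
# every time slot; a reactive jammer emits only in slots where it
# senses a transmission. Slot count and traffic rate are arbitrary.

random.seed(0)
SLOTS = 1000
tx = [random.random() < 0.3 for _ in range(SLOTS)]  # sender uses ~30% of slots

active_energy = SLOTS      # active jammer: one emission per slot, always
reactive_energy = sum(tx)  # reactive jammer: one emission per sensed packet

print(reactive_energy < active_energy)  # → True: same disruption, less energy
```

This energy asymmetry is one reason reactive jammers are considered harder to detect and to defend against.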

One of the key issues that make jamming attacks a big threat is that they are easy to launch. As a recent survey [2] indicates, jamming devices targeting various wireless technologies, including GPS, mobile communications, and Wi-Fi, are already available for purchase. In the case of Wi-Fi, special devices may not even be needed, as ordinary computers can be turned into jammers.

On top of this, increasing security against jamming may not always be easy. Certain types of stealthy jamming attacks can cause a significant number of packet-delivery failures on a wireless channel without being detected. One way of mitigating jamming attacks is to use frequency hopping, where transmissions are made over a random sequence of different frequencies; however, a powerful attacker can still overcome such methods [3, 4].
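The benefit of frequency hopping can be seen in a small Monte Carlo sketch (the channel and slot counts are arbitrary assumptions): with C channels and a jammer able to block only one channel per slot, roughly a fraction (C - 1)/C of transmissions get through.

```python
import random

# Toy frequency-hopping model: transmitter and single-channel jammer
# each pick a channel per slot; a packet survives unless they collide.

random.seed(1)
C, SLOTS = 8, 10_000
delivered = 0
for _ in range(SLOTS):
    tx_ch = random.randrange(C)    # transmitter hops uniformly at random
    jam_ch = random.randrange(C)   # jammer guesses a channel to block
    if tx_ch != jam_ch:
        delivered += 1

# Empirical delivery rate is close to the analytical (C - 1) / C = 7/8.
print(abs(delivered / SLOTS - (C - 1) / C) < 0.02)
```

A more powerful attacker, e.g. one that can jam many channels at once or predict the hopping sequence, breaks this guarantee, which is exactly the concern raised in [3, 4].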

There are a few jamming incidents that indicate the criticality of the issue. In 2015, cars parked near a retail store could not be unlocked remotely using key fobs, indicating the presence of a jammer that interfered with the key fob signals [5]. Another, much more concerning case involves the explosion of an oil pipeline: a recent report [6] on the 2008 explosion of the Baku-Tbilisi-Ceyhan oil pipeline hints at the possibility of cyber-attacks involving jamming of satellite communications to prevent the transmission of alerts.

It appears that jamming will remain a major issue. Researchers point out that next-generation air traffic communication systems [7], vehicle platoons [8], satellite navigation, and the power market [9] are all susceptible to jamming attacks. With the expansion of the Internet of Things, the use of wireless communications is rapidly increasing in many fields, and jamming is becoming a bigger threat. This prompts an important question: how can we prepare for jamming attacks?

Within IFAC, researchers are addressing this question from the perspective of control engineering. These efforts include:

  • evaluation of the performance of existing control systems under jamming attacks, and
  • development of new systems that are resilient to jamming attacks.

We briefly introduce these lines of research below. Interestingly, although these research efforts deal with cyber-attacks, the approaches are not based on information-technology-oriented methods.

In a typical wireless networked control system setup, remotely located components exchange data with each other over a wireless medium. Some researchers evaluate the performance of wireless networked control systems by investigating the level of jamming they can tolerate without major issues such as disruption of operation. Since emitting jamming signals requires energy, jamming is costly to the attacker. Ideally, a control system should be able to operate even under attacks from an attacker with large resources.

The challenge in evaluating the performance of a control system under jamming attacks is that we cannot know exactly when attacks may start or end. Another issue is that the power of the jamming signal used by the attacker may change each time there is an attack. Therefore, it is also not clear how likely a transmission failure is when there is jamming. One approach to understanding the effects of jamming despite this uncertainty is to consider the worst-case scenarios that may happen.

To identify the worst case, it is of interest to explore the question: what would be the optimal strategy of the attacker? The attacker wants to disrupt the normal operation of a system without using excessive resources. For instance, in several research articles, jamming energy is considered as a constraint in the problem, and it is assumed that the attacker tries to cause as much damage as possible within specified energy limits. Another approach is to consider jamming energy as part of the attacker's cost function in an optimization problem where the attacker tries to minimize energy usage. Some researchers also use game-theoretic methods to understand how the optimal strategies of the attacker relate to the optimal strategy for the transmission of the measurement and control data.
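Under the energy-budget formulation, a toy version of the attacker's problem (the per-slot damage values and the budget below are invented) reduces to jamming the slots where a dropped packet hurts the most:

```python
# Toy energy-constrained attacker: each time slot has a "damage" value
# if its packet is dropped; with energy for only E jamming pulses, the
# optimal strategy in this additive model is to jam the E most
# damaging slots. All numbers are hypothetical.

damage = [3.0, 0.5, 2.2, 4.1, 1.0, 0.3, 2.9]  # value of dropping each slot
E = 3                                          # energy budget, in pulses

jam_slots = sorted(range(len(damage)), key=lambda t: damage[t], reverse=True)[:E]
total = sum(damage[t] for t in jam_slots)

print(sorted(jam_slots), round(total, 1))  # → [0, 3, 6] 10.0
```

The worst-case analyses in the literature ask the defender's side of the same question: how much damage can any budget-E allocation inflict, and can the control loop still meet its specifications under that allocation?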

Designing control systems that are resilient to jamming attacks is also an important research theme within IFAC. For instance, some researchers have studied control systems that incorporate mechanisms to detect the presence of an attack. Furthermore, researchers have recently developed so-called event-triggered controllers that pick the times of data transmissions so as to reduce the effect of jamming on the operation. If a particular transmission attempt faces a jamming attack, a new transmission time can be scheduled based on the performance requirements.
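A toy sketch of the event-triggered idea (the jamming window and the backoff rule below are hypothetical, not a specific published scheme): a jammed attempt triggers the scheduling of a new transmission time, rather than a fixed periodic retry.

```python
import random

# Event-triggered retransmission under jamming: when an attempt is
# jammed, the sender schedules a new attempt after a random delay.

random.seed(2)

def jammed(t):
    """Hypothetical reactive jammer, active during a known window."""
    return 10 <= t < 20

t, attempts = 10, 0             # first attempt falls inside the window
while True:
    attempts += 1
    if not jammed(t):
        break                   # transmission succeeds at time t
    t += random.randint(1, 4)   # event-triggered: pick a new attempt time

print(t >= 20, attempts > 1)  # → True True: delivery deferred past the attack
```

In published schemes the new transmission time is chosen from the closed-loop performance requirements, e.g. so that a stability condition still holds despite the missed samples; the random backoff here only stands in for that rule.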

Literature on the cyber security of control systems indicates that as an attacker becomes more knowledgeable about the system, more sophisticated attacks than jamming may also become an option. The attacker can alter the data being transmitted and, in certain cases, inject false data into the system without being noticed. In addition, control systems may also face replay attacks, where the attacker intercepts the transmissions and sends valid but old measurement/control data to cause damage while still following the communication protocol.

As the risk of jamming and other types of attacks is increasing rapidly, ensuring cyber security of control systems will be a challenge of growing importance.

1. https://doi.org/10.1145/1062689.1062697
2. http://www.theiet.org/sectors/information-communications/signal-jamming.cfm
3. https://doi.org/10.1016/j.adhoc.2009.04.012
4. https://www.kth.se/social/files/56112825f276544047e235c7/freq_hopp_long.pdf
5. http://www.techrepublic.com/article/wireless-jammers-cast-a-dark-shadow-on-iot-security/
6. https://www.bloomberg.com/news/articles/2014-12-10/mysterious-08-turkey-pipeline-blast-opened-new-cyberwar
7. https://doi.org/10.1007/978-3-642-38980-1_16
8. https://doi.org/10.1109/ITSC.2015.348
9. https://doi.org/10.1109/GLOCOMW.2011.6162363

Article provided by:
Ahmet Cetinkaya, Postdoctoral Research Fellow
Hideaki Ishii, Associate Professor 
Tokyo Institute of Technology
IFAC TC 1.5 on Networked Systems

A New Year's message from the IFAC President

Dear IFAC Social media followers, Dear Friends and Colleagues,

Best wishes to you and your loved ones for the New Year. Our wish is that you will enjoy 2017 in good health, and that you will find it peaceful, prosperous and rewarding.

The social media platforms of IFAC are steadily gaining followers and reaching new audiences. It is great to count you among IFAC social media followers, and I wish to express my sincere thanks for your continued support and your important contribution to the activities of IFAC.

I warmly encourage you and the IFAC Technical Committees to support the IFAC Social Media Strategy by following and retweeting/sharing the IFAC blog and participating in the discussions on the IFAC Twitter and Facebook accounts. Interested blog contributors are kindly invited to contact IFAC Social Media Liaison Jakob Stoustrup at jakob@es.aau.dk.

I take great pleasure in inviting you to attend the 20th IFAC World Congress which will be held in Toulouse, France in July 2017. Everything is on track for what promises to be a highly rewarding, memorable and enjoyable event. I am happy to inform you that the IPC has received more than 4200 submissions and I look forward to meeting you all in Toulouse, France at the flagship event of our Federation.

With best wishes,

Janan Zaytoon

IFAC President


Closed-loop weight control: the power of feedback

In this article, a closed-loop approach to human body weight control is presented. The main purpose of the article is to demonstrate that applying feedback has significant benefits over conventional open-loop techniques suggested in the rich health literature on the subject. In particular, a closed-loop approach is robust to the strongly adaptive mechanisms of the human body and to disturbances of various kinds. Also, in contrast to conventional approaches proposed in the health literature, the presented method based on feedback does not depend on any specific diet. In fact, the approach can be applied to any diet preferred by the subject.

DISCLAIMER: The proposed method has not been approved by medical doctors. If applied incorrectly, it can potentially cause significant health issues. It is strongly recommended not to pursue the experiments described below without consulting a physician.


For a large and increasing proportion of the world population, overweight and obesity cause a wide range of health issues and in particular are leading causes of premature death. The World Health Organization (WHO) defines overweight as a Body Mass Index (BMI) greater than or equal to 25 kg/m2  and obesity as BMIs larger than 30 kg/m2. According to WHO, worldwide obesity has more than doubled since 1980. In 2014, almost two billion adults were overweight. More than 40 million children under the age of five were overweight or obese in 2014.
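The WHO thresholds quoted above are easy to mis-remember; as an illustration, they reduce to a few lines of Python (the function names are ours, not WHO's):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index in kg/m^2."""
    return weight_kg / height_m ** 2

def who_category(bmi_value: float) -> str:
    """Coarse classification using the WHO thresholds quoted above."""
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "normal or underweight"

# An 82 kg subject of height 1.77 m has a BMI of about 26.2 kg/m^2:
print(who_category(bmi(82, 1.77)))  # -> overweight
```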

In part for health reasons, dieting has been recommended by medical doctors and other health experts for centuries, at least dating back to the early 18th century, e.g. by the English doctor George Cheyne, who based on personal experience recommended diets for anyone suffering from obesity or overweight as described in his 1724 report, An Essay of Health and Long Life. There is no shortage of descriptions of diets in contemporary literature, ranging from esteemed medical journals to popular magazines and newspapers. In Western societies, a significant proportion of the population has been following one or several such diets for longer or shorter periods of time. In addition to health challenges, overweight/obesity also have significant psycho-social effects.

In this article, we shall provide a control theory perspective on weight control. A simple feedback algorithm will be described below along with experimental data verifying the algorithm.

Modeling weight gain

The dynamics of human body weight is by far dominated by three factors:

  • Food and drink intake, instantaneously causing a (partly temporary) weight gain
  • Excretion, instantaneously causing a (partly temporary) weight loss
  • Metabolism, slowly but steadily causing (temporary) weight loss

Recent research has shown that exhaust from the lungs (part of excretion) is a major factor in weight loss: burning 10 kg of human fat requires the inhalation of 29 kg of oxygen, producing 28 kg of carbon dioxide and 11 kg of water. As food and drinks are temporarily stored in the human stomach and bowels, the body weight is instantaneously increased by the weight of any food or drink consumed. Metabolism is usually divided into catabolism and anabolism, where catabolism is the process of breaking down organic matter and anabolism is the reverse process of constructing proteins and nucleic acids. Metabolism is catalyzed by enzymes, and metabolic rates can be strongly time and state dependent, governed by such catalyzing enzymes. In eukaryotes, such as Homo sapiens, metabolism is connected to a series of proteins in mitochondria. As a very coarse model, the level of metabolism at any given time is therefore proportional to the number of mitochondria. The number of mitochondria depends strongly on tissue types and therefore on the body distribution of these, but in the larger picture the number of mitochondria is positively correlated with the number of cells in the body, which in turn is approximately proportional to the body weight. In summary, this rough reasoning leads to the following extremely simple model for body weight dynamics:

\[\frac{dw(t)}{dt} = -\alpha(w,t)\cdot w(t) - e(t) + f(t)\]

where \(w(\cdot) > 0\) is the body weight, \(\alpha\) is a positive parameter, depending on state and time, that governs the metabolism, \(f(\cdot)\) is the food/drink intake function, which can only attain non-negative values, and \(e(\cdot)\) is the excretion function, which can also only attain non-negative values. Clearly, this model cannot be expected to be accurate in open loop. For example, it does not capture the difference in dynamics between catabolism and anabolism, which would require a higher-order model. Below, however, we shall argue that this very simple model surprisingly suffices to understand and design closed-loop behavior.
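As a sanity check on this model, a forward-Euler simulation is straightforward. All numbers below are illustrative, not fitted to any subject; the intake and excretion rates are chosen to balance metabolism exactly, so the weight should hold its equilibrium:

```python
import numpy as np

def simulate_weight(w0, alpha, food, excretion, days=30, steps_per_day=24):
    """Forward-Euler simulation of dw/dt = -alpha*w(t) - e(t) + f(t).

    `food` and `excretion` are callables returning rates in kg/day;
    `alpha` is a (here constant) metabolic rate in 1/day.
    """
    dt = 1.0 / steps_per_day
    n = days * steps_per_day
    w = np.empty(n + 1)
    w[0] = w0
    for k in range(n):
        t = k * dt
        w[k + 1] = w[k] + dt * (-alpha * w[k] - excretion(t) + food(t))
    return w

# Illustrative equilibrium: -0.025*80 - 0.5 + 2.5 = 0 kg/day,
# so the simulated weight stays at 80 kg.
w = simulate_weight(80.0, alpha=0.025,
                    food=lambda t: 2.5, excretion=lambda t: 0.5)
```

With time-varying `food` and `excretion` callables, the same routine reproduces the within-day sawtooth pattern of meals and metabolic decay.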

Proposed control algorithm

The approach suggested in this article relies on the following assumptions:

  • Body weight is a measurable state variable
  • Food weight is a controllable input
  • Metabolic rates can be time-varying, but are bounded from below, \(\alpha \geq \alpha_{min}\)

Based on these assumptions, a simple feedback control law that takes body weight as its measurement and specifies food intake as the control signal can be devised:

\[F(t) = r(t+T) - w(t)\]

where \(F(t) = \int_t^{t+T} f(\tau)\, d\tau\) is the weight of food and drinks consumed during a meal starting at time \(t\) and ending at time \(t+T\); further, \(r(t+T)\) is the control reference at time \(t+T\) (the end of the meal). Since \(f(\cdot)\) is a non-negative function, the reference \(r(\cdot)\) has to be chosen larger than \(w(\cdot)\) at all times. In practice, the algorithm can rely on (suitably conservative) estimates for some meals, if an insufficient number of measurements is available, as long as the integral constraint is met during a day; please see the experimental data below.
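The control law amounts to a one-line computation at each meal; a minimal sketch (the function name is ours):

```python
def meal_budget(current_weight, reference_at_meal_end):
    """Feedback law F(t) = r(t+T) - w(t): the total weight of food
    and drink allowed for the upcoming meal, in kg. Since food intake
    cannot be negative, the budget is clipped at zero (a zero budget
    signals that the reference was chosen infeasibly low)."""
    return max(0.0, reference_at_meal_end - current_weight)

# Example: weighed in at 79.40 kg before dinner with an evening
# reference of 80.10 kg, allowing up to 0.70 kg of food and drink.
budget = meal_budget(79.40, 80.10)
```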

A consequence of the above is that a reference demanding a weight loss larger than that dictated by the metabolism (catabolism) over any period of time is infeasible. In practice, however, the reference weight loss should be significantly smaller, not just marginally smaller, than the metabolic weight loss between two consecutive meals; otherwise the body will not get a sufficient amount of nutrients to sustain normal operation, and health may be compromised. Further, when choosing a reference, it should be taken into account that α tends to be monotonically increasing/decreasing with a monotonically increasing/decreasing w, i.e. metabolism tends to adapt to changing weight (this is well documented in the medical literature).

Experimental verification

The algorithm described above was applied during an experiment with a duration of 44 days, with the author of this article as the subject. A reference was chosen that had a constant slope for the first 31 days (one month), followed by a constant value. The initial value of the reference was chosen as the initial condition of the body weight. The final value of the reference was chosen as a body weight that would bring the BMI from an initial 26.1 kg/m2 (mild overweight) down to 23.8 kg/m2, i.e. well into the normal (non-overweight) range. In summary, this schedule implied a weight loss of 7.4 kg during the 31-day weight loss period, i.e. a daily decrement of 239 g, followed by a static weight condition for 13 days.
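The reference trajectory of the experiment can be sketched as follows (the initial weight `w0` is illustrative, since only BMI values are reported above; the total loss and its duration are taken from the text):

```python
def reference(day, w0=84.0, loss_total=7.4, loss_days=31):
    """Piecewise-linear reference: a constant slope for `loss_days`
    days (losing `loss_total` kg in total), then a constant hold.
    The initial weight w0 is an assumed, illustrative value."""
    daily = loss_total / loss_days  # ~0.239 kg/day, the 239 g decrement
    return w0 - daily * min(day, loss_days)
```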


Figure 1: Results of closed-loop weight control experiment


The experimental results can be seen in Figure 1. In practice, the algorithm was carried out by three daily body weight measurements: a morning measurement, a measurement immediately before the last meal of the day, and a late evening measurement for validation. The breakfast and the lunch meals were chosen to weigh approximately half the margin between the (known) upcoming evening reference and the morning measurement. That left about half the food intake for the evening meal, which was weighed on the plate and calibrated to match the remaining margin up to the scheduled reference. With this approach, the reference was normally reached by each evening measurement. Figure 1 shows a few overshoots. These happened at events where adhering to social code prohibited the meals from being weighed, so estimates had to be applied instead. Also, drinks taken after the last meal were not calibrated, which gave minor deviations. The undershoots in the beginning of the experiment are deliberate.

It is interesting to note that as the actual weight approaches the target, the metabolic cycles decrease significantly in amplitude. This is probably due to body responses that change the metabolic rates. It is likely that such a mechanism has developed evolutionarily to respond to periods of food scarcity. In contrast, metabolism is seen to increase significantly close to the end of the experiment, where the flat part of the reference has allowed a much higher food intake, causing the body to respond by what could perhaps resemble a food surplus scenario in an evolutionary context. Throughout the experiment, the conscious awareness of the subject provided another level of feedback, as the subject gained experience with the impact of his exercise, food composition, etc.


The closed-loop weight control approach proposed in this article has the virtue of offering deterministic results, based on the single assumption that the algorithm is followed strictly. It should be noted that the method is completely independent of any specific composition of the diet. In fact, although the daily weight loss of the experiment was significant, the involved diet throughout the experiment included a proportion of energy intensive food components such as chocolate and red wine (the red wine is discernible in the experimental results, causing the metabolic cycles to reduce significantly in two instances). Also, it should be noted that due to monotonicity properties of metabolic systems, any diet that has the same weight loss as in the described experiment, will have the exact same average food intake, provided the food has the same distribution of proteins/carbs/fats.

A limitation of the proposed approach is that it does not address the nutritional adequacy of diets. If one tries to lose weight too fast, or sets too low an ultimate steady-state reference, one’s health will suffer. In practice the reference trajectory should be “reasonable.”

On the other hand, as an important conclusion of this article, feedback can be combined with any given diet, providing a layer of mathematically guaranteed weight loss to a physiology based diet that would typically be composed from a health perspective. The main approach to a healthy body will always be a healthy lifestyle with healthy food and lots of exercise. However, for anyone on a diet, there is simply no reason not to embed the diet in a closed-loop approach and take advantage of the power of feedback!


Article provided by:
Jakob Stoustrup
Department of Electronic Systems
Automation & Control
Aalborg University, Denmark
IFAC Technical Board

Control of Dynamical Systems by Partial Differential Equations


Many phenomena are common to us all, but the way they work may be less well known. Why? Because they are dynamical systems! What is a dynamical system? In the cases of interest here, such systems are described by Partial Differential Equations (PDEs), and in order to study them we have to understand their properties, and we need to control some of them. We also need to simulate the controls we develop, in order to be sure that they do exactly what is expected, or to understand how a phenomenon evolves!

This intriguing area will be studied in a recently granted project “DYCON–Dynamic control”, which aims to develop a multifold research agenda in the broad area of Control of Partial Differential Equations (PDE) and their numerical approximation methods by addressing some key issues that are still poorly understood. To this end we aim to contribute with new key theoretical methods and results, and to develop the corresponding numerical tools and computational software.

The field of PDEs, together with numerical approximation and simulation methods and control theory, have evolved significantly in the last decades in a cross-fertilization process, to address the challenging demands of industrial and cross-disciplinary applications such as, for instance:

  • The management of natural resources (e.g. water),
  • Meteorology (e.g. making better weather predictions, which involves big-data problems and related numerical problems),
  • The oil industry (e.g. oil drilling, whose main problem is the friction at the bit),
  • Biomedicine (e.g. cancer treatment strategies via immunotherapy),
  • Human and animal collective behaviour (e.g. understanding the behaviour of bees in order to anticipate their extinction, and the relations and interactions between several species), etc.

The ERC Advanced Grant DYCON project identifies and focuses on six key topics that play a central role in most of the processes arising in control applications, but which are still poorly understood: control of parameter dependent problems; long finite time horizon control; control under constraints; inverse design of time-irreversible models; memory models and hybrid PDE/ODE models, and the links between finite and infinite-dimensional dynamical systems.

These topics cannot be handled by superposing the state of the art in the various disciplines, due to the unexpected interactive phenomena that may emerge, for instance, in the fine numerical approximation of control problems. The coordinated and focused effort that we aim at developing is timely and much needed in order to solve these issues and bridge the gap from modelling to control, computer simulations and applications.

The ERC Advanced Grant DYCON provides resources to researchers willing to contribute to these endeavours within the research team led by Enrique Zuazua at Universidad Autónoma de Madrid-Spain.

Researchers interested in cooperation are welcome to get in contact with Enrique Zuazua (enrique.zuazua@uam.es, www.enzuazua.net).

There will be openings and opportunities for researchers at all career stages: internships for PhD students from other centres and groups, PhD and postdoctoral contracts, and one-quarter visiting positions for established researchers.


Download the article
Word document  with references can be downloaded here (100KB)

Article provided by:
Valérie Dos Santos Martins
Laboratoire d’Automatique et de Génie des Procédés, 
Université Claude Bernard Lyon 1
TC 2.6. Distributed Parameter Systems

A survey on industry impact and challenges thereof


At its 2014 World Congress, IFAC launched a “Pilot” Industry Committee with the objective of increasing industry participation in and impact from IFAC activities. I chair this committee with the support of Roger Goodall (Loughborough University, UK) and Serge Boverie (Continental, France) as co-chairs. This committee was established as an outcome of an Industry Task Force led by Roger Goodall in the last triennium.

In 2015 the committee undertook a survey of its members to get their views on the impact of advanced control and the challenges associated with enhancing that impact. The survey had two questions; 23 of the then 27 members (excluding the chair) responded. The majority of the membership is either currently with or has prior affiliation with industry; all others have had substantial industry involvement as well. Most of the members were nominated by IFAC National Member Organizations and Technical Committees.

Although limited in many ways, I thought the survey responses would be of interest to the controls community.

Survey Question 1: Impact of Specific Advanced Control Technologies

First, we asked for members’ perceptions about the industry success (or lack thereof) of a dozen advanced control technologies. PID control was also included in the list for calibration purposes. A glossary was included with the survey, listing topics covered under each technology. Members were asked to assess the impact of each of these technologies by selecting one of the following:

  • High multi-industry impact: Substantial benefits in each of several industry sectors; adoption by many companies in different sectors; standard practice in industry
  • High single-industry impact: Substantial benefits in one industry sector; adoption by many companies in the sector; standard practice in the industry
  • Medium impact: Significant benefits in one or more industry sectors; adoption by one or two companies; not standard practice
  • Low impact: A few successful applications in one or more companies/industries
  • No impact: Not aware of any successful deployed real-world application

The results: The control technologies are listed below, in order of industry impact as perceived by the committee members:

Rank  Technology                                High-impact ratings  Low- or no-impact ratings
 1.   PID control                                      100%                  0%
 2.   Model-predictive control                          78%                  9%
 3.   System identification                             61%                  9%
 4.   Process data analytics                            61%                 17%
 5.   Soft sensing                                      52%                 22%
 6.   Fault detection and identification                50%                 18%
 7.   Decentralized and/or coordinated control          48%                 30%
 8.   Intelligent control                               35%                 30%
 9.   Discrete-event systems                            23%                 32%
10.   Nonlinear control                                 22%                 35%
11.   Adaptive control                                  17%                 43%
12.   Robust control                                    13%                 43%
13.   Hybrid dynamical systems                          13%                 43%

On the face of it, these results are disappointing. No advanced control technology is unanimously acknowledged by industry-aware control experts as having had high industry impact—90 years after its invention (or discovery), we still have nothing that compares with PID! It’s also concerning that the “crown jewels” of control theory appear at the bottom of the list.

However, the fact that all the technologies had at least some positive assessments suggests that the impact could well be higher than indicated: Many control scientists and engineers are likely not aware of the impact of control technologies outside the application domains of their experience. Thus the problem may be as much the perception as the reality.

Survey Question 2: Issues and Challenges with Industry Impact

The second question listed a number of statements and asked respondents to indicate their level of agreement with each. Agreement could be indicated as strongly agree, agree, neutral, disagree, or strongly disagree.

The statements and the levels of agreement are tabulated below. I have also noted any significant differences of opinion between the industry and academic members of the committee.

  • Industry lacks staff with the technical competency in advanced control that is required for high-impact applications. Agree: 83%; Disagree: 4%.
  • Control researchers are much poorer than researchers in other fields at communicating their ideas and results to industry management. Agree: 26%; Disagree: 30%.
  • The maturity or readiness level of results of advanced control research is too low for attracting industry interest. Agree: 57%; Disagree: 22%. (42% of industry respondents, but no academic respondent, disagreed.)
  • Advanced control has limited relevance to problems facing industries and their customers. Agree: 4%; Disagree: 65%.
  • The conflict between industry deadlines and academic research timelines is worse in control than in related engineering fields. Agree: 30%; Disagree: 35%.
  • Control researchers place too much emphasis on applied mathematics or advanced algorithms, whereas successful industry applications require deep domain knowledge. Agree: 83%; Disagree: 13%.
  • Control researchers place too little emphasis on plant/process modeling and model-development methodologies. Agree: 57%; Disagree: 17%. (No one from industry disagreed; 30% of academics disagreed.)
  • Students in control (undergraduate and graduate) are not sufficiently exposed to problems in industry. Agree: 70%; Disagree: 13%. (No one from industry disagreed; 30% of academics disagreed.)
  • The academic control community is not seriously interested in collaboration with industry. Agree: 26%; Disagree: 39%. (33% of industry respondents but only 11% of academic respondents agreed.)
  • There is no problem: advanced control is successful and appreciated in relevant industries. Agree: 13%; Disagree: 83%.


A clear message is that domain understanding/modeling is crucially important but not adequately pursued and taught. Neither expertise nor experience in advanced control per se is sufficient to realize industry impact.


This survey wasn’t, nor was it intended to be, scientific or comprehensive, but I and my fellow committee members have found the results thought- and discussion-provoking. We are continuing to explore the challenging problem of industry impact from control research. Among other outputs, we expect to recommend specific enhancements to IFAC events, publications, and volunteer groups. Your feedback is welcome and will be appreciated!

Download the article
Word document  with references can be downloaded here (400KB)

Article provided by:
Tariq Samad
Senior Fellow
Honeywell/W.R. Sweatt Chair in Technology Management
The University of Minnesota
Vice chair, IFAC Technical Committee

Consumer-driven automation for smartening the grid


Consumers are expected to play a considerably greater role in smart grid deployment, and it is crucial to boost their awareness of this more active role. The smart grid is a great opportunity for all consumers, whose involvement in demand side management will significantly speed up the development of a smart grid market. The way energy is used has to be revolutionised and, to actualise that, consumers need to understand what benefits they will achieve and how to change their behaviour to gain those benefits. All the players in the electricity system need to learn how to engage and effectively educate consumers, and improve their trust. We do not yet know the best way to make this happen, but we do know the highly negative impact of inadequate consumer engagement on future deployment plans. Thus, control solutions and automation systems for demand side management need to take consumers into account: their preferences, their needs and the uncertainty in their behaviour.

The next-generation electric grid needs to be smart and sustainable to deal with the explosive growth of global energy demand and to achieve environmental goals. To effectively smarten the grid we need to rethink the roles and responsibilities of all players in the electricity system. This smartening is a progressive and revolutionary process (Figure 1). However different the settings around the world, and at whatever rates they are deployed, the use of information and communications technology to monitor and actively control generation and demand in near real time is indisputably a common feature. Therefore, control and automation are essential for enabling consumers to actively support the grid.

Figure 1. Smarter electricity systems (source: IEA, 2011) [Click on image to view larger version]

The increased control over the network can enable a wider, more sophisticated range of smart methods and innovative schemes, such as demand response and smart energy management systems for buildings, to facilitate local management of demand and generation. Demand response includes both manual and automated consumer response, as well as smart appliances and thermostats that are able to respond to price signals or carbon-based signals. These smart devices are connected to an energy management system or controlled directly by the utility or a system operator. Smart energy management systems for buildings need to incorporate the user into the design, and thus be responsive to their occupants, in order to improve their comfort and allow smart appliances and heating systems to reach the market and respond to price signals to help decrease electricity bills. The benefits for consumers can be diverse, e.g. reduction of the electricity bill, improvement of living conditions, and support for more environmentally friendly energy behaviour.

In particular, smart energy management systems are required to be able to:

  • respond to signals from the grid and take action on this basis (e.g., decreasing energy use when prices are high or automatically shifting consumption to times when prices are lower);
  • manage local generation facilities, such as solar panels, and feed any surplus energy back into the grid;
  • optimally schedule storage devices, which can be used to balance out the smart grid.
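As an illustration of the first capability, here is a toy price-responsive scheduling rule. It is a sketch in the spirit of the text, not the algorithm of any deployed system; the function name and price figures are assumptions:

```python
def schedule_deferrable_load(prices, hours_needed):
    """Toy demand-response rule: run a deferrable appliance (e.g. a
    washing machine) during the cheapest `hours_needed` hours of the
    day, given a 24-element hourly price forecast."""
    cheapest = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(cheapest[:hours_needed])

# A flat tariff with a cheap overnight window (illustrative EUR/kWh):
prices = [0.30] * 24
prices[2], prices[3], prices[4] = 0.10, 0.08, 0.12
print(schedule_deferrable_load(prices, 2))  # -> [2, 3]
```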

Those advanced and innovative energy management systems make buildings smart and we can claim that a smart grid cannot exist without smart buildings. Hence, there will be more and more active roles for consumers of different sizes to play in a smart grid, for instance:

  • Residential consumers can choose among different tariff schemes and optimally shift smart appliance demand away from peak times through smart meters and energy management systems;
  • Industrial and commercial consumers can participate in the energy market through
    a wide range of demand response schemes;
  • Generator owners can participate in demand response schemes and the market by supplying needed energy to the grid.

Novel control and automation systems are becoming quite widespread, although standardised solutions are still not available, which means that expensive tailored configurations are required. This clearly limits the engagement of consumers, in particular small-scale consumers. In addition to designing and deploying control and communication solutions affordable to consumers of a wider range of sizes, effective motivational factors must be explored and thoroughly examined (e.g. environmental concerns, better comfort, control over electricity bills). The risk here is that consumers who do not make the savings expected from their behavioural change might consider the whole experience disappointing and frustrating.

Accurate, systematic and methodical research and evaluation are still needed to identify the optimal methodology for better understanding the interaction between consumers and the energy market, as well as the effect of enabling technologies on smart grid deployment.

A persistent behavioural change is vital to effectively enable smart energy technology development. We still need an answer to the following questions:

  • Is there an optimal mix of behavioural change, consumer feedback and automation technologies?
  • How much customer education is required and what are the best approaches?
  • Which types of automated demand response schemes are most useful to different types of customers (residential, commercial, industrial)?

Research groups, along with industry and governments, need to design and test more consumer-focused control solutions that can foster large-scale consumer behaviour change.


Download the article

Word document  with references can be downloaded here (400KB)

Article provided by:
Alessandra Parisio
School of Electrical and Electronic Engineering
The University of Manchester
IFAC Technical Committee 9.3 (Control for Smart Cities)

Automation and Bionic Technologies to Assist an Ageing Population

Automation and Bionic Technologies

Over the last three decades, the pervasiveness of engineered communication systems, enabled by the development of cheap sensing and computation capabilities, lightweight battery storage and low-power actuation systems, has exploded. Concurrently, advances in health care and medicine have led, and will continue to lead, to populations with ageing demographics throughout the industrialised world.

One impact is that the current generation of retirees will live longer and is arguably the first to have grown up with communication and automation technologies as an integral part of everyday life. As a consequence, there are significant opportunities to develop assistive technologies based around automation systems that provide both better quality of life and lower medical costs as user acceptance of the developed technologies will likely be high. Of critical importance is the engagement with the likely users during the development process to ensure that interfaces and social aspects are properly identified. Of particular relevance is the need to avoid overly intrusive approaches, by capitalizing on the embedded nature of suitable technology, and to undertake co-design with the target groups.

The opportunities for automation and ICT technologies then range from non-intrusive detection of incidents, subsequent intervention through partial or potentially complete mitigation of damage through appropriate actions, to assistance in recovery from incidents through appropriate rehabilitation.

As an illustration of the potential benefits of seamless integration of automation (one of many that could be highlighted), we can consider falls in the elderly. These represent by far the most common preventable incident with serious consequences for the over-65s, and account for a clear majority of hospitalisations in this age group.

From a detection standpoint, there is an opportunity to leverage smartphone uptake (recent surveys have indicated that the vast majority of the elderly today own smartphones). The integration of multiple sensors into phones has already been exploited by a number of apps for background fall detection. In essence, these use relatively simple algorithms with thresholds set on acceleration measurements from the inertial measurement units integrated in the phones to trigger the detection of an event. With the introduction of smart watches carrying their own sensors, false classification rates will be lowered, as it is easier to prevent false classifications if the sensor position is known to be constant with respect to the user’s body.
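In the spirit of the threshold-based algorithms described above, a toy detector might look as follows. The thresholds and window length are illustrative assumptions, not taken from any particular app:

```python
def detect_fall(accel_magnitudes_g, impact_g=2.5, free_fall_g=0.4):
    """Toy threshold detector: flag a fall when a near-free-fall
    sample (acceleration magnitude well below 1 g) is followed within
    a few samples by an impact spike above `impact_g`."""
    for i, a in enumerate(accel_magnitudes_g):
        if a < free_fall_g:
            # Look for an impact in the next five samples.
            if any(b > impact_g for b in accel_magnitudes_g[i + 1:i + 6]):
                return True
    return False

# 1 g at rest, a brief free-fall phase, then a 3 g impact:
print(detect_fall([1.0, 1.0, 0.2, 0.1, 3.0, 1.0]))  # -> True
```

Real apps must additionally suppress false positives from dropped phones and vigorous movement, which is where the known sensor position of a smart watch helps.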

Naturally, the detection of a potentially injurious event should be coupled with the ability to mitigate the damage. Already in the existing applications, detection of a potential fall via the phone is used to trigger an alert sent to a designated person(s) along with GPS data describing the wearer’s location, typically with a short delay after a fall is detected, where the owner has the opportunity to correct for a false positive classification. Smart phones can also automatically undertake a tiered call response cycle under these conditions, from family, to emergency services.

One of the most serious consequences of falls amongst the elderly is fracture of the femoral head – a more common occurrence as bone density and reaction time of the person (which would aid in in situ fall mitigation) are typically both diminished with age. Furthermore, the complications associated with the surgical procedure for femoral repair lead to significantly increased mortality risk in the elderly. Mitigating the effect of a fall through cushioning the femur in at-risk areas is possible through passive protection, however active devices operating on an air-bag principle activated on fall detection through smart sensing and classification may provide a less intrusive system with better damage mitigation.

Of even greater potential than mitigating a fall is the design and development of devices that can prevent it. Such a system requires early detection before action is taken, thereby relying on more sophisticated sensor feedback and smarter integration with the wearer’s natural actuation capability, so as to provide assistance when required. There are already commercial prototype systems aiming to do so, such as the Hybrid Assistive Limb (HAL) developed by Cyberdyne (no, not (yet) the Terminator company!), which rely on conventional robotic architectures to provide functionality.

Usability will only increase as the man-machine interface is continually refined. Assistive technologies include mechatronic aids, which can assist rather than entirely prevent falls: a lightweight mechanical assistive support device could achieve this, as an intelligent reconceptualisation of the once familiar polio leg braces. However the advent of wearable sensors and soft robotics offers perhaps greater potential due to reduced weight and subsequent reduced on-board power requirements, as demonstrated in prototype systems such as those under development at various research institutes.

Finally, bio-mechatronics offers great potential for targeting rehabilitation strategies towards individual patients. The opportunities include using available sensing technologies to record real-world activity, which can be interpreted by clinicians to gauge and improve recovery. Assistive therapeutic aids can initially enhance patient capability through EMG feedback and then transition to systems that retrain or strengthen muscles, thereby reducing the probability of further injurious events. Such approaches may partially alleviate the need for physiotherapy to be conducted wholly onsite, thereby reducing treatment costs and also improving recovery rates.

So while medical and health sciences can claim some responsibility for creating the (nice) problem of an increasing ageing segment of the population, it is perhaps engineering and automation technologies that are going to play a major role in assisting that population to continue to live active and fulfilling lives. It is, however, critical that the age groups concerned play an active participatory and responsible role in the codesign of devices and automation being created for their benefit.

Download the article

Word document with references can be downloaded here (1Mb)

Article provided by:
Prof Chris Manzie 
Department of Mechanical Engineering
University of Melbourne
IFAC TC 4.2 (Mechatronic Systems) and 7.1 (Automotive Control)

Traffic Management in the Era of Vehicle Automation and Communication Systems (VACS): Do you give up control?


Traffic Control Centres (TCC) are expensive pieces of infrastructure tasked with the problem of sensing, surveying, monitoring, and actively interfering with traffic flow in road networks.

Figure 1 provides a broad overview of how a TCC operates. The controlled system is a network of roads equipped with sensors and control effectors. Two-way flow of information to and from the field is carried over the IT infrastructure maintained by the TCC. Network operators manage traffic in real time based on streams of information converging on the traffic control room. They have to decide which objectives and policies to support, and how to implement them using the available control devices.


Figure 1: schematic of a TCC.

This is a most challenging and highly complicated task, encompassing diverse hardware and software systems, which have to be operated following specific regulations and procedures, in support of policies and objectives defined by network operators or wider political bodies. The complexity of the traffic flow management problem is due to the often chaotic nature of human behaviour, the diverse needs generating the individual trips, the constraints imposed by regulations, e.g. safety, and the objectives TCCs pursue, e.g. delay minimisation or emissions reduction.

Different control architectures can be conceptualised for performing the same tasks. Currently, the most common architecture adopted by TCC owners is a centralised control structure, allowing room for decentralised operations under strong supervision. A lot of money has been invested in this kind of infrastructure, resulting mostly in static networks of sensors (loop detectors, CCTV etc.) and control effectors (traffic lights, variable message signs etc.). It is usually within this framework that control systems for particular traffic management applications are designed.
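A classic example of a control system designed within this static framework is feedback ramp metering of the ALINEA type, where the on-ramp inflow is adjusted at each cycle in proportion to the deviation of the measured downstream occupancy from a target value. A minimal sketch follows; the gain, set-point and bounds are illustrative values, not taken from any particular installation:

```python
def alinea_step(r_prev, occ_meas, occ_target=18.0, K_R=70.0,
                r_min=200.0, r_max=1800.0):
    """One step of the ALINEA ramp-metering feedback law:

        r(k) = r(k-1) + K_R * (occ_target - occ_meas(k))

    r is the metering rate in veh/h, clipped to [r_min, r_max];
    occupancies are in percent. Gain and bounds here are
    illustrative defaults only.
    """
    r = r_prev + K_R * (occ_target - occ_meas)
    return min(max(r, r_min), r_max)

# If the measured occupancy is below target, the ramp is opened up;
# if above target (congestion building), the inflow is throttled.
print(alinea_step(1000.0, 10.0))  # below target: rate increases
print(alinea_step(1000.0, 30.0))  # above target: rate clipped at r_min
```

The point of the example is the architecture it presupposes: a fixed loop detector supplies the occupancy measurement and a fixed traffic signal actuates the computed rate, exactly the static sensor/effector pairing discussed above.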

With the advent of highly equipped vehicles and vehicle automation, Vehicle Automation and Communication Systems (VACS) are changing the system architecture of traffic management. VACS are treasure troves of information, as a lot of data can be extracted to address a variety of needs, e.g. commercial, infotainment and traffic management. From a control engineering perspective, however, such information is of little help unless it is explicitly used to interfere positively with traffic in real time.

In this sense, VACS become both the sensor and the control device: they are the means both of collecting and transmitting information, and of actively interfering with traffic. With vehicles operating within a highly robust, secure and high-performance communication network, static sensors and control systems will become obsolete, a memory from the past or at best a fall-back system. Fundamentally different operational requirements then apply compared to those of centralised architectures, posing new challenges for the control design of network-wide vehicular flow.

The control technology for completely automating a vehicle is largely available.
Of course there are challenges; see e.g. a previous entry in IFAC’s blog (Link here). However, going from the individual vehicle to the aggregate behaviour of several thousands of vehicles, and to the control of their collective interaction, is an entirely different control problem, and in many respects more difficult to address. A fundamental change in thinking, tailored to this new road / communications infrastructure / vehicle / driver system, is necessary.

Many different scenarios can be envisaged, including:

  • Compulsory intervention by a TCC authority in vehicle control. This implies that full control of the vehicle is delegated to a traffic authority. Acceleration, speed and position trajectories are decided by a higher-level system that supervises an area and decides on the optimal vehicle operation according to some societal notion of cost. Dedicated lanes segregating manual and autonomous vehicles could also be used, although this is very difficult, particularly in urban environments.
  • Partial intervention by a TCC authority. In this case vehicle control is assumed (or partially assumed) by a traffic authority should certain conditions arise, e.g. in a congested road section or near the approaches of an intersection.
  • Freely acting informed drivers. In this case, it is the drivers’ intelligence that takes over as the regulator of traffic, under the influence of information communicated to them through an appropriate human-machine interface. This scenario does not exclude the use of autonomous vehicles, but the decision to allow a traffic authority access to, and control of, a vehicle is left to the driver’s discretion.
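The three scenarios differ essentially in who decides the applied vehicle trajectory, and under what conditions. The distinction can be caricatured in a few lines of code; everything below (the names, the congestion trigger, the speed values) is a hypothetical illustration, not a proposed design:

```python
from enum import Enum

class Mode(Enum):
    FULL = "full"          # scenario 1: TCC always controls the vehicle
    PARTIAL = "partial"    # scenario 2: TCC takes over under set conditions
    ADVISORY = "advisory"  # scenario 3: driver decides, TCC only informs

def command_for(mode, congested, tcc_speed, driver_speed):
    """Return (applied_speed, message) for one vehicle under a given mode.

    Under FULL the TCC trajectory is always applied; under PARTIAL only
    when a trigger condition (here simply a congestion flag) holds;
    under ADVISORY the driver's own choice is applied and the TCC value
    is merely suggested.
    """
    if mode is Mode.FULL:
        return tcc_speed, None
    if mode is Mode.PARTIAL and congested:
        return tcc_speed, "TCC has assumed control in this section"
    return driver_speed, f"Suggested speed: {tcc_speed} km/h"

# Same TCC advice, very different outcomes depending on the scenario:
print(command_for(Mode.FULL, False, 60, 100))      # TCC speed applied
print(command_for(Mode.PARTIAL, False, 60, 100))   # driver speed applied
print(command_for(Mode.ADVISORY, True, 60, 100))   # driver speed applied
```

The sketch makes the institutional question concrete: the control logic is trivial, but which branch society is willing to run is precisely the "do you give up control?" question posed below.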

Are you ready to give up control of your car for the sake of traffic management? Are you willing to delegate your vehicle’s control to a different authority, other than you?

Although the answer seems to be “yes” when this question arises in the context of individual vehicle platforms, it may not be so when it is posed in the context of everyday commuting and travelling. Leaving aside institutional and legal issues, there is the question of whether people will accept losing their freedom of action in operating their own car. There are situations where a “yes” or a “no” seems clear. When you are stuck in a solid block of congestion, immersed in a stop-and-go situation, it seems much preferable to use the car either as an office to work on the computer, or as a TV set to watch a movie, leaving the vehicle to crawl its way to the destination. When driving in the countryside, a lot of people would respond with a “no”, as they would rather drive manually just to enjoy the experience.

But what happens when, while commuting to work, you believe that what is suggested, or the way your vehicle is operated (let’s say by a TCC), is not the best for you? It may be the best at a societal level (although not necessarily so), i.e. for the “common good”, but not at an individual level. Many people will answer “no” to this question, irrespective of whether we think of this as an egoistic response. Furthermore, the very notion of being forced to allow access to, and delegate control of, an object considered private may be unacceptable to many people. They cannot be neglected, nor can their choice be banned, since they are legitimate road users. Their existence shapes the properties of the traffic flow process, and hence they affect control design. In other words, there are strong cultural issues involved, which affect the efficiency of any large-area traffic control design.

Designing vehicle-based control systems supporting autonomous operations requires focusing primarily on the individual vehicle; designing network-wide traffic management controllers, by contrast, requires focusing on the broader picture of spatio-temporal traffic dynamics and on the way individual vehicles interact with other vehicles and the infrastructure. All three scenarios outlined pose daunting challenges on the technical side, even if autonomous vehicles allow us to treat them as “ballerinas” in the daily commuting dance. The scenario of freely acting informed drivers, although the most challenging of the three, seems the most appropriate, the most politically rewarding, and the easiest to promote to the public.

Download the article

Word document with references can be downloaded here (1Mb)

Article provided by:
Apostolos Kotsialos apostolos.kotsialos@durham.ac.uk 
School of Engineering and Computing Sciences
Durham University, United Kingdom
IFAC TC 7.4 (Transportation Systems)

Copyright © 2018 IFAC blog page

All rights reserved unless otherwise explicitly indicated.