In the future, mobile robots will support our efforts in safety-critical tasks: search and rescue (localizing people in burning buildings), autonomous navigation (self-driving cars), and surveillance (tracking adversarial behavior). To complete these tasks, robots with heterogeneous capabilities (speed, sensing, payload) will dynamically form teams, agree on navigation and other action plans, and act. The robots will rely on distributed intelligence, communicating only with their neighbors via wireless networks. And, following the current technological paradigm, the robots' learning and control capabilities will be software-driven. Overall, the robots will form a cyber-physical network, which I henceforth call an Internet of Robotic Teams (IoRT).

However, like any current cyber-physical system (computers, power grids), the IoRT will be vulnerable to deceptive and denial-of-service (DoS) attacks and failures that compromise the robot teams, their plans, and their actions (Fig. 1). The attacks and failures may be transduction or physical attacks, or even algorithmic deceptive failures, lying beyond the reach of cybersecurity and of classical estimation and control. The National Academy of Engineering has listed the need for robust autonomy against attacks and failures among its 14 grand engineering challenges for the future.

Fig. 1: The future Internet of Robotic Teams (IoRT) will be vulnerable to attacks and failures that compromise the multi-robot teams, their plans, and their actions: attack A1 removes robots from the formed multi-robot teams; A2 removes robots once they have agreed on future action plans, such as navigation trajectories (depicted with light-blue dotted arrows); and A3 deceives the robots into acting differently than the agreed plans.

My goal is to enable what I call an Internet of Resilient Robotic Teams (IoR2T), where robots not only withstand attacks and failures (by being robust) but also adapt and recover (by being resilient). Towards the IoR2T, I plan to wed tools for robust autonomy with tools for multi-robot distributed intelligence driven by adaptive learning and control capabilities.


In my past research, I worked on robotics and control towards provably robust autonomy against deception and DoS attacks and failures, identifying fundamental limits and contributing provably optimal algorithms. I built on fundamental tools from automatic control, computational complexity, robotic perception, statistics, and non-convex and combinatorial optimization. I validated my results experimentally on tasks of autonomous navigation for exploration, search and rescue, and surveillance.

1. Resource-aware learning and control

In a world of connected machines, from the Internet of Things (IoT) to the IoR2T, competitions for resources arise: from how to allocate the available power (when to activate what) to how to allocate the available communication and computation bandwidth (when to communicate and compute what). Hence, towards the IoR2T, the first step is to address the necessity for robotic and, more broadly, cyber-physical systems (CPS) that operate successfully despite power, communication, and computation constraints.

The first key contribution of my PhD was to identify the first fundamental limits on the resource-constrained operation of CPS, in terms of their learning and control capabilities, and to develop provably near-optimal algorithms that reach those limits. To identify the limits, I employed tools from control and computational complexity theory. To develop the algorithms, I employed tools from combinatorial (discrete) optimization, proving properties of discrete convexity, namely, submodularity.
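To illustrate why submodularity is useful here, below is a minimal, hypothetical sketch, not my published algorithms, of the classic greedy algorithm for maximizing a monotone submodular function subject to a cardinality constraint, which attains the well-known (1 - 1/e) approximation guarantee (Nemhauser et al., 1978). The sensor-coverage objective is an invented toy example.

```python
def greedy_max(ground_set, f, k):
    """Greedily pick k elements maximizing a monotone submodular f.

    Each step adds the element with the largest marginal gain; for
    monotone submodular f, the result is within 1 - 1/e of optimal.
    """
    selected = set()
    for _ in range(k):
        best = max((e for e in ground_set if e not in selected),
                   key=lambda e: f(selected | {e}) - f(selected))
        selected.add(best)
    return selected

# Toy coverage objective: each sensor covers a set of targets,
# and f counts how many targets the chosen sensors cover jointly.
coverage = {"s1": {1, 2}, "s2": {2, 3}, "s3": {4}, "s4": {1, 2, 3}}
f = lambda S: len(set().union(*(coverage[s] for s in S))) if S else 0
team = greedy_max(coverage.keys(), f, 2)  # picks s4 first (gain 3), then s3
```

Coverage-style objectives like this one are submodular (adding a sensor to a larger team never helps more than adding it to a smaller one), which is exactly the property that makes the greedy choice provably near-optimal.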

Selected publications

2. Denial-of-service robust learning and control via combinatorial optimization

But the robots and CPS composing the IoT and IoR2T are vulnerable to denial-of-service (DoS) attacks and failures that can shut down subsets of the robots' learning and control capabilities: sensors, actuators, communication channels, or even entire robots; e.g., see A1-A2 in Fig. 1. Although much research has focused on deceptive attacks and failures, little work has addressed DoS attacks and failures, and none has developed a generalized framework against any number of them.

Hardware for experimental validation

My PhD research contributed such a general framework, providing the first provably near-optimal algorithms for robust combinatorial optimization against any number of DoS attacks. The algorithms not only enabled the DoS-robustification of the resource-aware designs in Section 1; they also enabled the first DoS-robust multi-robot planning methods for navigation tasks of information gathering, such as search and rescue (localizing people in burning buildings) and surveillance (tracking adversarial behavior); see selected publications below. The validation approach used small-scale hardware experiments (Fig. 1) and large-scale simulation experiments in Gazebo. The algorithms also apply to discrete optimization problems in machine learning (data summarization, feature selection).
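To make the robustness notion concrete, here is a hypothetical brute-force baseline, not my scalable near-optimal algorithms: it selects the size-k set whose objective value is largest after an adversary removes the worst-case alpha elements (modeling a DoS attack that shuts robots down).

```python
from itertools import combinations

def worst_case_value(selected, f, alpha):
    """Objective value after an adversary removes the worst-case
    alpha elements from the selected set."""
    selected = set(selected)
    if alpha >= len(selected):
        return f(set())
    return min(f(selected - set(removed))
               for removed in combinations(selected, alpha))

def robust_select(ground_set, f, k, alpha):
    """Exhaustive baseline: among all size-k sets, pick the one whose
    worst-case post-attack value is largest.  Exponential in the size
    of the ground set; this is what scalable algorithms must avoid."""
    return max(combinations(sorted(ground_set), k),
               key=lambda S: worst_case_value(S, f, alpha))

# Same toy coverage objective: sensors cover sets of targets.
coverage = {"s1": {1, 2}, "s2": {2, 3}, "s3": {4}, "s4": {1, 2, 3}}
f = lambda S: len(set().union(*(coverage[s] for s in S))) if S else 0
team = robust_select(coverage, f, k=3, alpha=1)
```

The point of the example is that a team chosen for attack-free performance can collapse once an attacker removes its single most valuable member, whereas a robust selection hedges by keeping enough redundancy among the surviving elements.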

Selected publications

3. Deception robust perception

Besides DoS-robust designs and navigation plans, an IoR2T future where robots reliably navigate the world also requires reliable robotic perception capabilities for navigation, such as object recognition, scene reconstruction, and simultaneous localization and mapping (SLAM). The current algorithms supporting these capabilities are brittle to deceptive failures, namely, outliers, caused by sensor malfunctions or incorrect data associations; i.e., the outliers can be interpreted as deceptive attacks by nature (A3 in Fig. 1).

During my post-doc, my collaborators and I developed real-time outlier-robust algorithms with broad applicability to all the perception problems above, employing combinatorial and non-convex optimization. Validated on real-world datasets, the algorithms outperform the state of the art in both outlier-robustness and running time.

As such, our proposed algorithms promise to be valid replacements for the current algorithms for outlier-robust perception, including RANSAC, which is significantly slower and more brittle to outliers, yet has been the standard algorithm for outlier-robust perception (and, more broadly, for outlier-robust estimation in statistics, e.g., for learning and prediction) for the last 30 years.
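For readers unfamiliar with the baseline, here is a minimal, hypothetical RANSAC sketch for 2D line fitting on synthetic data (illustrating the standard approach, not my proposed algorithms): repeatedly fit a model to a random minimal sample and keep the model explaining the most inliers.

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Minimal RANSAC for fitting a 2D line y = a*x + b.

    Each iteration fits a line through two random points and counts
    how many points lie within tol of it; the best model wins."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip degenerate (vertical) samples
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 9 inliers on y = 2x + 1, plus 3 gross outliers.
pts = [(x, 2 * x + 1) for x in range(9)] + [(1, 40), (2, -30), (5, 100)]
model, inliers = ransac_line(pts)
```

Even this toy version shows the trade-off: RANSAC tolerates gross outliers but pays for it with many random trials, and its success probability degrades as the outlier ratio grows, which is exactly where faster, provably robust alternatives matter.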

Selected publications