TOTO, a sleek telepresence robot developed in collaboration with IIT Delhi’s I-Hub Foundation, showcases the innovation of SeiAnmai Technologies. SriKrishna, the visionary behind this research-ready creation, shares his insights with EFY’s Yashasvini Razdan.
Q. What are you building with TOTO?
A. TOTO is an acronym for ‘tele-observance and tele-operation,’ a telepresence robot platform. We specialise in telepresence systems, aiming to recreate the experience of being present in a remote location. The goal is to replicate your senses, allowing you to see and hear what is happening elsewhere and engage with people in that location. Our initial product focuses on tele-operation and tele-observance robots. Users can log into these robots and operate them remotely over the internet, utilising the robot’s camera, microphone, and speaker for interaction.
Our mission is to address the challenges associated with physical presence, such as the expenses and time involved in travel. We recognise the inefficiencies and inconveniences, especially for those in upper management or involved in inspections. The telepresence solution aims to provide an efficient alternative, allowing users to be virtually present without the need for extensive travel.
In comparison to traditional video calls, where the view is limited to what the other person shows, our telepresence robots offer full control. Users can choose the desired angles, observe their surroundings, and navigate the robot as needed. This eliminates the challenges of directing someone else to show the correct angle during a video call. With telepresence, users have complete autonomy, removing the reliance on others for transportation and enabling them to move around freely with the robot.
Q. Are telepresence and teleobservance the only intended functions of this robot?
A. In robotics, the term ‘manipulation’ refers to the ability to move or interact with objects in the surrounding environment, typically associated with a robotic arm or manipulator. We have two basic types of robots: a mobile robot, which moves around, and a manipulator, which interacts with objects. The goal is a combination of both, a mobile manipulator: a robot that moves around and manipulates objects in its surroundings. This aligns with the telepresence feature, where telepresence involves creating a virtual presence elsewhere, and manipulation allows the remote control of objects.
Q. Where is the data collected by the robot stored? And what do you do with it?
A. Our architecture allows for various approaches. Currently, we can either redirect through our servers or establish a peer-to-peer connection. With the peer-to-peer connection, your computer directly receives data from the robot, similar to a video call. We use this technology for the robot, sending not only video but also additional metadata for a more comprehensive user interface. In the peer-to-peer setup, we are not collecting any data from the robot; the server just handles the signalling. The user can opt in to give access to some metadata to allow for bug identification and future improvements.
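The interview does not name the protocol, but the flow described (a server that only handles signalling, with video and metadata then flowing directly between peers) is the standard WebRTC-style pattern. The sketch below is purely illustrative, not SeiAnmai’s actual code: the peer IDs and addresses are made up, and the in-memory ‘server’ stands in for a real signalling service. The point it demonstrates is that the server only relays the address exchange and never sits in the data path.

```python
# Illustrative sketch of signalling-only peer connection setup.
# All names and addresses here are hypothetical.

class SignallingServer:
    """Relays offers/answers between peers; stores no media or metadata."""
    def __init__(self):
        self.mailboxes = {}  # peer_id -> list of pending messages

    def send(self, to_peer, message):
        self.mailboxes.setdefault(to_peer, []).append(message)

    def receive(self, peer_id):
        return self.mailboxes.pop(peer_id, [])


class Peer:
    def __init__(self, peer_id, server):
        self.peer_id = peer_id
        self.server = server
        self.remote_addr = None  # learned during signalling

    def offer(self, to_peer, my_addr):
        # The server only carries the address exchange (signalling).
        self.server.send(to_peer, {"from": self.peer_id, "addr": my_addr})

    def answer(self, my_addr):
        for msg in self.server.receive(self.peer_id):
            self.remote_addr = msg["addr"]
            self.server.send(msg["from"], {"from": self.peer_id, "addr": my_addr})

    def complete(self):
        for msg in self.server.receive(self.peer_id):
            self.remote_addr = msg["addr"]


server = SignallingServer()
operator = Peer("operator", server)
robot = Peer("robot", server)

operator.offer("robot", "10.0.0.5:7000")  # hypothetical addresses
robot.answer("10.0.0.9:7000")
operator.complete()

# After signalling, each side knows the other's address and can stream
# video plus metadata directly, without the server in the data path.
```

Once signalling completes, the server’s mailboxes are empty again, mirroring the claim that no robot data is collected server-side in the peer-to-peer setup.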
Q. What is the area that the robot can monitor?
A. I can navigate around the floor area from here. Our robot has a two-wheeled design tailored for flat surfaces. For multi-floor coverage, we plan to have one robot on each floor, allowing users to log in and move seamlessly between floors, utilising the robot on each level as needed.
Q. What are its application areas?
A. The primary areas where our technology can be applied include hospitals, especially in ICU wards and scenarios involving infectious diseases. For example, if someone needs to observe a situation in the ICU remotely, they can log in and utilise the robot. In cases where someone is not accompanied, a message can be sent through the hospital, facilitating a connection to the robot for remote observation. Another application is in locations requiring inspections, such as factories and warehouses, where access may be restricted. The robot can navigate these environments, providing observation without compromising cleanliness. Similarly, in places like cold storage, where human endurance may be challenging, the robot can perform tasks more effectively. These are the applications we aim to address. We also plan to work on daily manipulation tasks, using robotic arms to interact with objects in an environment so tasks can be performed remotely.
Q. Have you deployed it in any of the above use cases?
A. Currently, TOTORE, the Research Edition, has been built for educators and researchers working in fields like simultaneous localisation and mapping (SLAM), mobile robotics, and swarm robotics. It serves as the platform for our software stack and communication network designed for research purposes. We are preparing a launch event for our upcoming robot, which has not yet been revealed. The current product in our lineup focuses on tele-operation and tele-observance.
Q. What are the capabilities of the research edition of this robot?
A. The research edition is entirely open-sourced, with both the design and software accessible for users to modify as needed. All components used are off-the-shelf, simplifying the purchasing and replacement process. The robot has demonstrated reliability, undergoing extensive testing with various test cases, including our own extensive use during the development of our software stack.
The product has the capability to map the entire floor plan of an area. Users can select any point on the floor plan, and the robot will autonomously navigate there while avoiding obstacles. It provides a video feed from its camera, along with a microphone and speaker for interaction. If users wish to implement computer vision applications, they can use the video feed for that purpose. The robot’s top is open, serving as a blank canvas for users to attach a robotic arm, sensor, or any mechanism relevant to their research. Essentially, this robot functions as a mobile visual medium.
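Point-and-click navigation on a mapped floor plan can be modelled as path search over an occupancy grid. The robot’s actual navigation stack is not described in the interview; the breadth-first search below is only a minimal sketch of the idea, with a made-up grid where 0 is free space and 1 is an obstacle.

```python
from collections import deque

# Illustrative only: plan a route from the robot's cell to a clicked
# goal cell on an occupancy grid, routing around obstacle cells.

def plan_path(grid, start, goal):
    """Breadth-first search on a grid; 0 = free cell, 1 = obstacle.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk parents back to start
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable


floor = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],  # 1s are obstacles the robot must route around
    [0, 0, 0, 0],
]
path = plan_path(floor, (0, 0), (2, 3))
```

Real systems typically use costed planners (A* or similar) over a SLAM-built map rather than plain BFS, but the input and output are the same shape: a map, a goal cell, and a collision-free route.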
Safety is paramount, especially in remote robot control scenarios. Even with manual control, there is a risk of collisions. We have implemented a safety system to ensure secure operation, preventing any damage to the surroundings or the robot itself. This safety feature is integral to the system’s functionality.
Q. What components or technologies have you brought together to make this robot?
A. This research robot is designed to be open source, ensuring accessibility for a wide range of users. We opted for the RasPi 4, a widely adopted and standardised board, to address the research requirements. The platform runs on a Linux-based system. Additionally, we use the Robot Operating System (ROS), an industry-standard open-source software framework, specifically for the robot’s research edition.
Q. Why did you choose to use a Raspberry Pi model instead of any other development Board?
A. We used the RasPi because of its rich software ecosystem within the open-source community, which allows extensive integration of functionality into projects.
Q. Can an industrial robot run on Raspberry Pi?
A. Certainly, I do not see any reason why it cannot be integrated into an industrial system. However, our current intention is to keep the industrial version proprietary for security reasons, as it plays a major role in the communication infrastructure. While the industrial version may not have the same form factor as the RasPi, the latter offers the flexibility to integrate into various systems, including potential industrial applications.
Q. What is the technology employed for navigation?
A. Our research robot currently incorporates a LIDAR, a reliable sensor for testing. Our hardware is flexible, allowing for a switch to an RGBD camera or exploration of visual odometry. However, our primary device for navigation is the LIDAR, which effectively maps the environment and detects obstacles. A camera is utilised as a complementary sensor to observe the surroundings. We also have a front-facing sensor to address limitations in the LIDAR’s perspective, ensuring the robot can detect small ledges or steps. This feature prevents the robot from unintentionally falling off edges. We aim to keep the sensor setup simple so the robot can operate in various environments, focusing on functionality while avoiding unnecessary complexity, increased development effort, and higher costs.
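The ledge-detection idea described here can be sketched simply: a downward-facing range sensor normally sees the floor at a known distance, so a reading much longer than that means the floor has dropped away. The mounting height, margin, and clearance values below are assumptions for illustration, not product specifications.

```python
# Hypothetical sketch of cliff detection plus forward clearance.
# Thresholds are made-up example values, not the robot's real specs.

FLOOR_DISTANCE_CM = 8.0  # assumed mounting height of the downward sensor
LEDGE_MARGIN_CM = 4.0    # tolerance before we call a reading a drop-off

def is_ledge(range_cm):
    """True when the reading is far enough past the floor to be a ledge."""
    return range_cm > FLOOR_DISTANCE_CM + LEDGE_MARGIN_CM

def safe_to_advance(lidar_min_cm, downward_cm, min_clearance_cm=30.0):
    """Combine LIDAR-style forward clearance with the cliff check:
    the robot may move only if nothing is close ahead AND the floor
    is still under the front sensor."""
    return lidar_min_cm > min_clearance_cm and not is_ledge(downward_cm)
```

In this framing the LIDAR handles obstacles at its scan height, while the dedicated downward sensor covers the blind spot the interview mentions: small ledges and steps the LIDAR plane cannot see.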
Q. What kind of battery does it use?
A. We use a LiPo (lithium polymer) battery, a widely adopted technology in smartphones and laptops.
Q. Is there any special sort of charging mechanism for this so you can plug it in anywhere?
A. We designed the robot with the capability to autonomously navigate to the charging dock and dock itself for recharging. The charging station connects to the mains power supply, enabling it to charge the robot’s LiPo battery. The robot can run for approximately six hours, and once depleted, it autonomously returns to the charging dock. The charging process takes about an hour, allowing for almost continuous use throughout the day; it can also be charged overnight.
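The return-to-dock behaviour is naturally expressed as a small state machine: operate until the battery is low, navigate to the dock, charge until full, then resume. The sketch below is illustrative only; the thresholds are assumed values, and the dock-arrival step is simplified to a single transition rather than real navigation.

```python
# Illustrative state machine for autonomous charging behaviour.
# Thresholds are assumptions, not product specifications.

class ChargingBehaviour:
    def __init__(self, return_threshold=0.15, full_threshold=0.95):
        self.return_threshold = return_threshold
        self.full_threshold = full_threshold
        self.state = "OPERATING"

    def update(self, battery_fraction):
        """Advance the behaviour given the current battery level (0..1)."""
        if self.state == "OPERATING" and battery_fraction <= self.return_threshold:
            self.state = "RETURNING_TO_DOCK"  # drive to the charger
        elif self.state == "RETURNING_TO_DOCK":
            self.state = "CHARGING"           # simplification: docking succeeds
        elif self.state == "CHARGING" and battery_fraction >= self.full_threshold:
            self.state = "OPERATING"          # resume normal duty
        return self.state


behaviour = ChargingBehaviour()
```

With roughly six hours of runtime and a one-hour charge, a loop like this lets one robot stay available for most of the working day, as the answer describes.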
Q. Where are you sourcing your components from?
A. Currently, we are buying components online. We do not restrict ourselves to a single e-commerce distributor; it depends on the availability and pricing offered by the seller.
Q. Who is the target audience for this research edition robot?
A. Currently, our target audience includes academia and learners. The robot serves as an open platform, allowing users to disassemble it, examine its components, and customise it as needed. Additionally, individuals interested in learning robotics can benefit from the robot, as it is available as separate parts for self-assembly. This provides an educational exercise for students and other learners in robotics.
Q. How do you intend to commercialise it? How do you intend for it to reach the market?
A. This robot targets individuals working in mobile robotics or related fields. For those involved in applications like SLAM technologies, multi-robot systems, swarm robotics, or any area requiring testing algorithms with robots, our open-source robot provides a valuable platform. We approach people in these adjacent fields, offering them the flexibility to modify and utilise the robot for their specific needs.
While our primary focus is on telepresence, this robot emerged from our efforts in that direction. Since the robot is open source, we hope that the developments and modifications made by users will also be open source, facilitating and simplifying the collaborative development process.
Q. Do you plan to sell it as a DIY kit?
A. We are likely to sell complete robots, but we will provide access to our software and designs so that individuals can create their own robots. For those who acquire a robot from us, we plan to offer services, tutorials, and guides on how to use the robot and our software stack, fostering collaboration and compatibility while leveraging the collective work and data within our ecosystem.
Q. Is developing the software stack your main focus right now? What does it entail?
A. Yes, that is correct. We have developed our own modified navigation system that prioritises safe navigation, ensuring the robot avoids collisions. All safety features are crucial for the telepresence system, making it easier for users to operate without the fear of collisions. These built-in safety features are valuable for any robotic application, providing essential feedback to avoid obstacles. Given that our system is open, users have the flexibility to build upon and enhance these features. The system’s openness also allows for the integration of various sensors, providing users the opportunity to experiment and innovate. Our core aim is to empower individuals engaging in research or software development in robotics, encouraging creativity and exploration.
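One common way a safety layer of the kind described here works is to cap the operator’s commanded speed based on the distance to the nearest obstacle, stopping entirely inside a hard stop zone. This is a generic sketch of that idea, not SeiAnmai’s actual navigation stack; the stop and slow-down distances are assumed values.

```python
# Sketch of a collision-avoidance speed limiter (illustrative only).
# Distances are example values, not the robot's real parameters.

STOP_DISTANCE_M = 0.3  # assumed hard-stop radius around obstacles
SLOW_DISTANCE_M = 1.0  # begin slowing inside this range

def limit_speed(commanded_mps, nearest_obstacle_m):
    """Return the speed the safety layer allows, never above commanded."""
    if nearest_obstacle_m <= STOP_DISTANCE_M:
        return 0.0  # too close: refuse to move
    if nearest_obstacle_m >= SLOW_DISTANCE_M:
        return commanded_mps  # clear ahead: pass the command through
    # Linear ramp between the stop and slow-down distances.
    scale = ((nearest_obstacle_m - STOP_DISTANCE_M)
             / (SLOW_DISTANCE_M - STOP_DISTANCE_M))
    return commanded_mps * scale
```

A layer like this sits between the remote operator and the motors, so even clumsy manual control cannot drive the robot into a wall: exactly the “operate without the fear of collisions” property the answer emphasises.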
Q. What license are you using to open source this technology?
A. I am inclined towards the GNU open-source model, finding it to be a better approach, though the final decision will be made on release day.
Q. What if others create a better market-ready product based on your existing stack?
A. I believe that if someone is using our product, they will always have the capability to do so. It is a free market, and I do not mind, as we already have the head start.
Q. Where did you build this product? Did you just design it and outsource the manufacturing?
A. The design and development work was conducted in our facility during our incubation at the I-Hub Foundation for Cobotics (IHFC), the Technology Innovation Hub (TIH) of IIT Delhi. While some parts, like silicon chips, were purchased since they cannot be produced in-house, the majority of the design was done by me, and the assembly took place here as well. That sums up what has been accomplished so far.
Q. Are you looking for any partners whom you can collaborate with?
A. We are not a large team yet; our hope is to form partnerships with the right people to move forward. Currently, we have developed the technology, we are still in research and development and are optimistic about finding good partners willing to collaborate with us and take our efforts further.
Q. So, what kind of partners would you be looking at based on your current status?
A. For manufacturing, we definitely need to find a partner, since our small team cannot handle the scale of physically building products. Collaboration is essential for deploying to critical use cases, and connections with event organisers will also be crucial.
Q. Have you received any support from other entities apart from IHFC?
A. I had an opportunity to work with Prof. S.K. Saha at IIT Delhi on an indigenous wheeled mobile robot platform, Robomuse 4, as part of my engineering internship. Taking inspiration from this, I later built TOTO as an intern in the IHFC READY programme. The earlier support came via the Research Entrepreneurship and Development for Youth (READY) programme; we have now been formally incubated at IHFC for over two years, with access to space and financial support. We have five people working with us as interns. As a startup, SeiAnmai Tech’s journey continues in robotics and deep tech.
Q. What is the IHFC-READY programme?
A. IHFC has been established in partnership with the Department of Science and Technology (DST), Ministry of Science and Technology, Govt. of India, under the National Mission on Interdisciplinary Cyber-Physical Systems (NM-ICPS). The focus of IHFC is ‘cobotics,’ a blend of ‘collaborative’ and ‘robotics.’
The IHFC-READY (Research Entrepreneurship and Development for Youth) programme is a six-month pre-seed incubation programme for final-year students and recent graduates. It offers financial support to develop their projects into products, reimbursing expenses and providing a stipend for the participant. During these six months, individuals have the opportunity to work on their projects. Afterwards, they can explore options such as technology transfer, or continue with the project if it shows promise.
Q. What are your future plans, milestones or targets for this robot?
A. Our initial target is to deploy these robots in event-like settings, primarily at conferences. This serves both as a form of marketing for the robot and as a solution for individuals who cannot attend in person: remote attendees can log in, look around, and interact with people as if they were physically present. The goal is to gather usage data in these scenarios to identify areas for improvement before deploying the robot in more critical use cases, such as medical centres and hospitals.