
Robots Can Perform Daily Tasks

(Image: a robot performing real-world tasks. Robot by Lukas)

 

 Is this the ending or the beginning?


Robots are the future, and they will replace or recreate much of what comes in their way.


As the world continues to evolve, so do the ways robots interact with it. It is a pleasant thought: a robot that helps us with day-to-day activities while adapting to different surroundings.


The world already has some robots that can perform chores, but imagine a robot that responds to your commands and helps you with many different chores without complaining.


For robots to help us in that way, they must perceive their surroundings the way we humans do. A clear internal picture of the environment around them is a necessity if robots are to understand the world as it is.


Perceiving their surroundings is a particularly hard task for robots, because raw pixels have to be transformed into a real-world understanding of the scene.


What has been proposed?


A new model called 3D Dynamic Scene Graphs is being developed, which gives robots the ability to quickly generate a 3D map of their surroundings, including objects with semantic labels (for example, a chair versus a table), along with people, rooms, walls, and other structures.
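
To make this concrete, here is a minimal sketch in Python of what such a layered scene graph could look like as a data structure. The class names, layers, and fields below are my own illustration, not the actual 3D Dynamic Scene Graphs or Kimera API.

```python
# A minimal, hypothetical sketch of a layered 3D scene graph.
# Class names, layers, and fields are illustrative only, not the real API.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class Node:
    node_id: int
    layer: str                            # e.g. "building", "room", "object", "agent"
    label: str                            # semantic label, e.g. "kitchen" or "chair"
    position: Tuple[float, float, float]  # centroid in the world frame
    children: List[int] = field(default_factory=list)  # nodes one layer below

@dataclass
class SceneGraph:
    nodes: Dict[int, Node] = field(default_factory=dict)

    def add(self, node: Node, parent_id: Optional[int] = None) -> None:
        self.nodes[node.node_id] = node
        if parent_id is not None:
            self.nodes[parent_id].children.append(node.node_id)
```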


The model also helps the robot extract information from the 3D map, allowing it to answer queries about the location of objects and rooms and about the movement of people in its path.
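
Continuing the illustrative sketch above, queries like "where is the chair?" or "which room is it in?" could look roughly like this; the helper functions are assumptions made for the example, not a published interface.

```python
# Hypothetical queries against the sketch above: locate an object and the
# room that contains it.
graph = SceneGraph()
graph.add(Node(0, "building", "house", (0.0, 0.0, 0.0)))
graph.add(Node(1, "room", "kitchen", (2.0, 1.0, 0.0)), parent_id=0)
graph.add(Node(2, "object", "chair", (2.3, 1.4, 0.0)), parent_id=1)

def find_by_label(g, label):
    return [n for n in g.nodes.values() if n.label == label]

def room_of(g, node_id):
    for n in g.nodes.values():
        if n.layer == "room" and node_id in n.children:
            return n
    return None

chair = find_by_label(graph, "chair")[0]
print(chair.position)                        # (2.3, 1.4, 0.0)
print(room_of(graph, chair.node_id).label)   # kitchen
```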


This compressed representation of the surroundings helps the robot make quick decisions and plan its path. It resembles the way humans plan a route from point A to point B by thinking about streets and landmarks rather than about every single position along the way.
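
As a toy illustration of why the compressed representation helps, a planner can search over a handful of rooms instead of millions of raw 3D points. The room layout below is invented for the example.

```python
# Toy example: plan a route over rooms rather than raw geometry.
from collections import deque

# Hypothetical room-adjacency graph, as might be extracted from a scene graph.
room_graph = {
    "kitchen": ["hallway"],
    "hallway": ["kitchen", "living room", "bedroom"],
    "living room": ["hallway"],
    "bedroom": ["hallway"],
}

def plan_route(start, goal):
    """Breadth-first search over the room graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in room_graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(plan_route("kitchen", "bedroom"))  # ['kitchen', 'hallway', 'bedroom']
```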


How does the technology work?


At present, robotic vision and navigation have advanced along two routes: real-time 3D mapping, which enables robots to reconstruct their environment, and semantic segmentation, which lets robots classify features as semantic objects and is mostly done on 2D images.


The new model of spatial perception does both: it builds a real-time 3D map of the surroundings while labeling the objects in that environment.


Kimera, an open-source library, is the key component of this model. It was developed to construct a 3D model of the environment while attempting to label objects, for example whether something is a chair or a desk. In short, Kimera is a mix of 3D mapping and semantic tagging.


To generate a 3D mesh, Kimera uses an existing neural network, trained on millions of real-world images, to predict the label of each pixel, and then projects those labels into 3D using ray-casting, a technique also used for real-time rendering of computer animation.
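
A rough sketch of that projection step is shown below. It assumes a depth image aligned with the labeled color image and a simple pinhole camera model, which is a simplification of what Kimera actually does.

```python
# Simplified sketch: cast a ray through each labeled pixel and scale it by the
# measured depth to get a labeled 3D point in the camera frame. Illustration
# of the idea only, not Kimera's implementation.
import numpy as np

def back_project_labels(depth, labels, fx, fy, cx, cy):
    """depth, labels: (H, W) arrays; fx, fy, cx, cy: pinhole intrinsics."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    valid = z > 0                            # skip pixels with no depth reading
    u, v, z = us.reshape(-1)[valid], vs.reshape(-1)[valid], z[valid]
    x = (u - cx) * z / fx                    # ray direction scaled by depth
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=1)     # (N, 3) points in the camera frame
    return points, labels.reshape(-1)[valid]
```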


This ultimately results in a color-coded 3D map of the robot’s environment.

Relying on this dense representation alone, however, would be computationally expensive and time-consuming.


To counter this problem, the researchers built algorithms on top of Kimera to construct 3D scene graphs from Kimera's initial map.


A scene graph is a general data structure, commonly used by vector-based graphics editing applications and modern computer games, that arranges the logical and often spatial representation of a graphical scene.


In this case, the algorithm breaks Kimera's 3D mesh down into distinct semantic layers, so that the robot can view a scene through a particular layer. This layered representation saves the robot from having to make sense of billions of points and faces.
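
The grouping idea can be illustrated in a few lines of Python. The label sets below are assumptions, and the real system works on mesh vertices and faces rather than a flat list of labeled points.

```python
# Illustration only: bucket labeled 3D points into coarse semantic layers so
# that higher levels of the graph can ignore the raw geometry underneath.
from collections import defaultdict

OBJECT_LABELS = {"chair", "desk", "sofa"}        # assumed label sets
STRUCTURE_LABELS = {"wall", "floor", "ceiling"}
AGENT_LABELS = {"person"}

def split_into_layers(points, labels):
    """points: iterable of (x, y, z) tuples; labels: matching semantic labels."""
    layers = defaultdict(list)
    for point, label in zip(points, labels):
        if label in AGENT_LABELS:
            layers["agents"].append((label, point))
        elif label in OBJECT_LABELS:
            layers["objects"].append((label, point))
        elif label in STRUCTURE_LABELS:
            layers["structure"].append((label, point))
        else:
            layers["unlabeled"].append((label, point))
    return layers
```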



What does this mean for the future?


The technology, called spatial AI, is in its infancy, but it promises exciting innovations in fields such as self-driving cars, rescue missions, manufacturing, and domestic robots.


It can also be used in wearables such as AR goggles, which could be asked questions such as "Where is the nearest exit?" or "Where did I leave my car keys?"


Our take on this


Technology is always exciting, and so is the thought of having human-like robots able to perform multiple tasks for us. We are living in an era of technological revolution, where each feat humans achieve is only a step toward the next. It is exciting to look at the prospective future, and a little frightening at the same time, to imagine how humans will look and behave in a few years.


There is no better way to predict the future than to create it and live through it.


Bibliography:

https://roboticsconference.org/program/papers/79/

https://www.sciencedaily.com/releases/2020/07/200715131222.htm

