What is reality? Some doctors say that if you perceive something to be real, your body will fill in the blanks to make it a reality.
By now we have all seen this claim. It isn't completely true and makes many assumptions, but there is some science behind it: our brains interpret what they "expect" to see. (1)
Virtual reality is similar in some ways: we accept that the virtual world omits some details, but that doesn't take away from the experience. My dad used to get sick at IMAX movies because they were so immersive they gave the impression of flying or driving. To him, perception was reality!
Virtual reality can give us the same kind of interaction with the Internet of Things (IoT): experience. When we experience things, we react. Our pupils dilate, our muscles contract, our breathing and heart rate change. By measuring these feedback mechanisms, we can read back a person's physical response to a virtual experience.
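To make this concrete, here is a rough sketch of turning those physiological readings into a single reaction score. Every baseline, weight, and signal name here is my own illustration, not part of any real headset SDK:

```python
# Minimal sketch of scoring a user's physiological reaction to a VR scene.
# All baselines and weights are hypothetical illustrations.

def arousal_score(heart_rate_bpm, pupil_diameter_mm, breaths_per_min,
                  baseline_hr=70.0, baseline_pupil=3.5, baseline_breath=14.0):
    """Combine deviations from resting baselines into a single 0..1 score."""
    hr_delta = max(0.0, heart_rate_bpm - baseline_hr) / baseline_hr
    pupil_delta = max(0.0, pupil_diameter_mm - baseline_pupil) / baseline_pupil
    breath_delta = max(0.0, breaths_per_min - baseline_breath) / baseline_breath
    # Equal weighting, clipped to 1.0 -- a real system would calibrate per user.
    return min(1.0, (hr_delta + pupil_delta + breath_delta) / 3.0)

# Example: an immersive moment raises all three signals.
print(arousal_score(heart_rate_bpm=95, pupil_diameter_mm=5.0, breaths_per_min=20))
```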
Using sensors, we can tell how someone is reacting, and this is driving improvements in the VR / AR experience. The newest headsets are not display-only; they also track our eye movement and our reaction to what we see. This helps programmers and artificial intelligence systems recognize when a representation on the screen is confusing or misleading. If the user is looking to the right but the guidance should be taking them to the left, we know the graphics need to be corrected. We use the IoT feedback from the self-facing camera to tell us the individual's reaction: if the instructions become confusing, we can sense through expressions and pupil reactions that the user is becoming frustrated. Using GPS and the forward-facing camera, we can also pinpoint where the frustration is occurring.
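As a simple illustration of that gaze-versus-guidance check, here is a minimal sketch; the angle threshold and sampling scheme are assumptions, not how any particular headset works:

```python
# Sketch of flagging a gaze/guidance mismatch from eye-tracking data.
# The 30-degree threshold and the sample values are illustrative.

def gaze_mismatch(gaze_angle_deg, intended_angle_deg, threshold_deg=30.0):
    """Return True if the user is looking far from where the UI directs them."""
    # Smallest signed difference between the two headings, in degrees.
    diff = (gaze_angle_deg - intended_angle_deg + 180.0) % 360.0 - 180.0
    return abs(diff) > threshold_deg

# The arrow points left (270 deg) but the user keeps looking right (~90 deg):
samples = [85.0, 92.0, 88.0, 95.0]
confused = sum(gaze_mismatch(g, 270.0) for g in samples) / len(samples)
if confused > 0.5:
    print("Guidance likely confusing: gaze opposes the indicated direction.")
```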
Programs are now self-evolving, able to determine when what is being transmitted via the cloud to the user is not producing the desired response. If a technician uses AR goggles that walk them through a repair using tags on a piece of equipment along with global positioning, we can tell whether the AR tags are in the right locations or whether the technician is getting lost. The sensor feedback reveals repetitive actions, such as walking through the same steps numerous times.
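A toy version of that repetition check might look like the sketch below; the step names and the revisit threshold are invented for illustration:

```python
# Sketch of spotting repeated repair steps from an AR session log.
# Step names and the max_visits threshold are hypothetical.
from collections import Counter

def flag_repeated_steps(step_log, max_visits=2):
    """Return steps the technician revisited more than max_visits times."""
    visits = Counter(step_log)
    return {step: n for step, n in visits.items() if n > max_visits}

log = ["locate_panel", "remove_cover", "locate_panel", "remove_cover",
       "locate_panel", "check_fuse"]
print(flag_repeated_steps(log))  # {'locate_panel': 3} -> tag may be misplaced
```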
With augmented reality being incorporated into so much more technical work, it just makes sense to close the loop: IoT and sensors for feedback, and artificial intelligence to make the system more immersive and user-friendly. AR can walk the user through an activity, whether you are using the technology to speed up bin-picking or to give assembly directions. Using tags on the equipment for locating speeds up the process and adds an element of error-proofing, with the forward-facing camera confirming that items are placed where they should be. The IoT loop verifies the activity, and using AI to learn from errors, and from why the instructions or indicators were confusing, really improves the instruction set for all tasks.
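Here is a minimal sketch of that error-proofing check, comparing the tag the forward-facing camera sees against where the instructions expect the part to go. The part IDs and the placement table are made up for illustration:

```python
# Sketch of placement verification: does the scanned location tag match
# the location the work instructions expect? All IDs are hypothetical.

EXPECTED_PLACEMENT = {
    "bolt_m8": "station_3_fixture_a",
    "bracket_12": "station_3_fixture_b",
}

def verify_placement(part_id, scanned_location_tag):
    """Return (ok, message) comparing the scanned tag to the expected spot."""
    expected = EXPECTED_PLACEMENT.get(part_id)
    if expected is None:
        return False, f"Unknown part {part_id!r}"
    if scanned_location_tag == expected:
        return True, "Placement confirmed"
    return False, f"Wrong location: expected {expected}, saw {scanned_location_tag}"

print(verify_placement("bolt_m8", "station_3_fixture_b"))
```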
Assembly and warehouse systems are improving their efficiency and accuracy with tools like Google Glass integrated into supply chain systems, improving the bottom line by reducing mistakes and speeding up the process. (2) Warehouse service companies are already training with this equipment, and while it doesn't solve every challenge, it benefits many applications.
Some of the systems, such as pick-by-vision (3), aim to show metrics that benefit the company on the two main factors: accuracy and speed. The systems are, of course, dependent on the inventory management system and the warehouse location system into which you integrate. Pick-by-vision systems use augmented reality, so the individual's actual vision becomes the base over which all the other information systems overlay mapping information, directions, photos of the item to be picked, and bar codes to be compared. The great part is that the systems can run in real time and improve through AI.
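To picture what such an overlay carries, here is a rough sketch of a per-pick payload. The field names and the confirmation check are my own assumptions, not any vendor's schema:

```python
# Sketch of the overlay payload a pick-by-vision system might push to the
# headset for one pick. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PickOverlay:
    sku: str                # item to pick, from the inventory system
    bin_location: str       # slot in the warehouse location system
    quantity: int
    photo_url: str          # reference image overlaid for visual comparison
    barcode: str            # expected barcode, verified on scan
    route_hint: str         # next direction rendered over the live view

task = PickOverlay(sku="SKU-4471", bin_location="A12-03-B", quantity=2,
                   photo_url="https://example.com/sku-4471.jpg",
                   barcode="0123456789012", route_hint="aisle A12, turn left")

def confirm_pick(task, scanned_barcode):
    """Accuracy check: the scanned barcode must match the expected one."""
    return scanned_barcode == task.barcode

print(confirm_pick(task, "0123456789012"))  # True -> pick verified
```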
Much of the success and failure of these systems comes down to the quality of the vision system and the cameras. Of course, the higher the quality, the more data and the slower the response time, unless your complete system is upgraded to utilize the data from the camera. This is also why many systems opt for cheaper GPS technology and scanned tags: these allow for lower resolution, but the trade-off is losing the fine-grained analysis that higher-quality vision systems offer. The display doesn't have to be glasses-style; it can be a smartphone or tablet, but then you give up the hands-free advantage. The image is scanned and compared against a database, and orientation is also determined in order to factor in directions and mapping. The closer you get to your location, the more detail you receive.
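That "more detail as you approach" behavior can be as simple as a distance-banded level-of-detail rule; the bands below are illustrative, not taken from any real product:

```python
# Sketch of increasing overlay detail as the worker approaches the target.
# The distance bands and layer names are illustrative assumptions.

def overlay_detail(distance_m):
    """Choose what to render based on remaining distance to the pick spot."""
    if distance_m > 20.0:
        return ["route_arrow"]                      # coarse navigation only
    if distance_m > 5.0:
        return ["route_arrow", "bin_highlight"]     # highlight the target aisle
    return ["bin_highlight", "item_photo", "barcode_prompt"]  # full detail

for d in (30.0, 10.0, 2.0):
    print(d, "->", overlay_detail(d))
```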
When using a slower system, or an interface that presents a lot of detail, issues can occur: the person using the device may walk past the location because the feedback took too long. This is a problem you need to address early on to make sure your system is robust enough to handle the interface.
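A quick back-of-the-envelope calculation shows why this matters; the latency figures and walking speed below are illustrative:

```python
# How far does a worker travel before delayed feedback arrives?
# 1.4 m/s is a typical walking pace; latencies are example values.

def overshoot_m(latency_s, walking_speed_mps=1.4):
    """Distance covered while waiting for the system's response."""
    return latency_s * walking_speed_mps

for latency in (0.2, 1.0, 2.5):
    print(f"{latency:.1f} s latency -> {overshoot_m(latency):.1f} m past the bin")
# At 2.5 s the worker is ~3.5 m beyond the location before the cue appears.
```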
Vision systems give you the advantage of recognizing and comparing locations based on visual cues. Simpler versions use tags similar to QR codes, which scale the distance based on a known location and the known size of the tag. This can be error-prone, though, as angular distortion and changes of position can take away some of the accuracy.
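The distance-from-tag-size idea is the standard pinhole-camera relation: distance = focal length (in pixels) × real tag size ÷ apparent tag size (in pixels). A minimal sketch, with example values for the focal length and tag size:

```python
# Sketch of estimating distance from a fiducial tag of known size using the
# pinhole-camera relation. Viewing the tag at an angle shrinks its apparent
# size and skews this estimate -- the accuracy loss noted above.

def tag_distance_m(tag_size_m, tag_size_px, focal_length_px):
    """Approximate camera-to-tag distance for a tag viewed head-on."""
    return focal_length_px * tag_size_m / tag_size_px

# A 10 cm tag appearing 50 px wide through a lens with f = 800 px:
print(f"{tag_distance_m(0.10, 50.0, 800.0):.2f} m")  # ~1.60 m
```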
Determining the picking or assembly order is where these tools really excel and become more than a glorified laptop, tablet, or smartphone. Programs that take information from the IoT and then use the positioning data to determine the optimized flow really capitalize on the system, and they can go further by using AI to learn the best flow sequence.
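As a toy version of flow optimization, here is a greedy nearest-neighbour route over pick locations. The coordinates are made up, and a production system would use real aisle geometry and a better optimizer than this heuristic:

```python
# Sketch of ordering picks by a greedy nearest-neighbour route using
# positioning data. Coordinates and SKUs are hypothetical.
import math

def order_picks(start, picks):
    """Greedy route: always walk to the closest remaining pick location."""
    route, here, remaining = [], start, dict(picks)
    while remaining:
        sku = min(remaining, key=lambda s: math.dist(here, remaining[s]))
        route.append(sku)
        here = remaining.pop(sku)
    return route

picks = {"SKU-1": (0, 9), "SKU-2": (1, 1), "SKU-3": (5, 4)}
print(order_picks((0, 0), picks))  # ['SKU-2', 'SKU-3', 'SKU-1']
```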
The key for many of these systems is the feedback loop from the IoT, sensors, or tags. The data must be communicated elsewhere to be compared and fed back with information; the glasses don't have enough on board to act alone. In warehouse systems the key questions are mainly: did you pick the right part, and how long did it take? In assembly systems, these AR and VR tools can render an obstruction that isn't there yet but is anticipated to be installed. That way you can run analysis checks and ergonomic assessments before installing the equipment that will cause the obstruction.
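Those two warehouse questions suggest a very small feedback record shipped off-device for analysis; the field names here are assumptions for illustration:

```python
# Sketch of the off-device feedback record: right part? how long?
# The event schema is a hypothetical illustration.
import json, time

def pick_event(sku, correct, started_at, finished_at):
    """Build the record the glasses would send to the backend for analysis."""
    return {
        "sku": sku,
        "correct": correct,
        "duration_s": round(finished_at - started_at, 2),
        "timestamp": finished_at,
    }

t0 = time.time()
event = pick_event("SKU-4471", correct=True, started_at=t0, finished_at=t0 + 12.4)
print(json.dumps(event))  # shipped to the cloud; the glasses can't analyze alone
```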
Training can really benefit from these tools. VR creates an environment similar to the one the trainee will work in without investing millions in equipment and training machines, which can greatly reduce the learning curve. These digital twins allow you to train and, more importantly, to create a duplicate environment for simulating conditions without the cost or the engineering effort. Many companies are producing these VR simulations prior to final assembly and testing, allowing the final systems to be tweaked before a hard tool is produced, which can take weeks or months and cost thousands of dollars.
The key to the success of AR and VR in your plant will be integration with your IoT and, by extension, your AI analysis system. If our systems don't get smarter, we will be spinning our wheels for a long time. Think of it this way: AR and VR are the train riding on IoT rails, and AI is the conductor… All aboard!