It used to be too expensive and too slow to suit many industrial applications, but now there are technical and market reasons why it could be time to think again about 3D machine vision, says Michelle Knott.
“The technology was not able to support either the speeds required in production or industrial budgets,” says Sergio Manuel, head of international sales for Photonfocus, discussing the drawbacks of 3D in the past. However, he adds that suppliers have made great strides in terms of both speed and cost, thanks to a range of technological innovations.
In fact, there are three parallel strands of technological development that have come together to brighten the prospects for 3D, according to Allan Anderson, chairman of the UK Industrial Vision Association (UKIVA) and managing director of Clearview Imaging: “3D generates a lot more data than 2D, and when you’re trying to run at line speeds it’s more difficult.
“So the first thing is the increase in general computing power. The second thing is software, because you need the tools to capture the 3D data and do something with it. Whether you’re doing pick-and-place robotics or inspecting cookies in a box, you need software to be able to do what you want to do and that’s come on leaps and bounds. The cameras are the other thing. There have been more cameras available and they’ve come down in price and are easier to install.”
There are also market factors increasing the appetite for 3D, with the growth of robotics being the biggest driver, according to Anderson.
Whether you’re doing pick-and-place robotics or inspecting cookies in a box, you need software to be able to do what you want to do and that’s come on leaps and bounds
Allan Anderson, chairman of UKIVA & managing director of Clearview Imaging
In addition, there has always been a pent-up need, because there are some applications where 2D really isn’t up to the job. “If you’ve got cookies stacked in a box and you’re looking down on it with 2D, can you spot the missing cookie?” he says. 2D is also of little help when you’re looking to assess volume, whether you’re checking the thickness of icing on a cake or counting the boxes on a pallet, rather than relying on weight alone.
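Anderson’s cookie example can be sketched in a few lines. A minimal illustration, assuming a pre-computed height map from a 3D camera (the function name, values and tolerance below are hypothetical, not from any vendor’s API):

```python
# Illustrative sketch: in a top-down 2D image a missing cookie in a stacked
# box can look identical to a full stack, but a 3D height map reveals it.
# Heights are in millimetres; all numbers here are made up for illustration.

def missing_positions(height_map, expected_mm, tolerance_mm=2.0):
    """Return (row, col) positions where a stack is shorter than expected."""
    return [
        (r, c)
        for r, row in enumerate(height_map)
        for c, h in enumerate(row)
        if h < expected_mm - tolerance_mm
    ]

# A 2x3 box of cookie stacks, each expected to be 50 mm tall;
# one stack is short because a cookie is missing.
heights = [
    [50.1, 49.8, 50.0],
    [50.2, 41.9, 49.9],   # 41.9 mm: one cookie missing from this stack
]
print(missing_positions(heights, expected_mm=50.0))  # [(1, 1)]
```

The same height data also answers the volume questions in the text, such as icing thickness, since each cell is a direct depth measurement rather than an inference from brightness.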
But there is still one remaining obstacle to the wider take-up of 3D imaging, and that’s a significant knowledge gap. Part of the issue is that 3D isn’t a single technology. When it comes to robust, factory-ready systems there are four main contenders: laser triangulation, stereo vision, time-of-flight and fringe pattern projection. There are other 3D technologies (Manuel adds photometric stereo to his list, for example), but those four are the most common in an industrial setting.
“All the technologies are there now and the costs have come down, but there’s still a knowledge gap. So it’s not as easy as saying ‘I’m doing X so the technology I need to use is Y’,” says Anderson. Clearview offers all four types of system and he says that this enables the company to offer impartial advice about the most appropriate approach.
There are several different sets of pros and cons to consider. The first is whether the technologies are ‘passive’ or ‘active’.
Active technologies emit light and measure what happens when it bounces back. Laser triangulation, fringe pattern projection and time-of-flight are active, while stereo vision works more like the human eye and is passive. “That means that laser, fringe and time-of-flight are really good when working from close by, whereas stereo vision is good when objects are further away, because it hasn’t got to throw light out and get it back,” says Anderson.
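The time-of-flight principle behind one of those active technologies reduces to a single calculation: the camera times a light pulse’s round trip, and distance is half that trip at the speed of light. A minimal sketch (the round-trip time below is an illustrative number, not sensor output):

```python
# Minimal sketch of the time-of-flight principle: emit a light pulse,
# measure the round-trip time, and halve the distance light travelled.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_seconds):
    """Distance to the object from a measured round-trip time."""
    return C * round_trip_seconds / 2.0

# A round trip of roughly 6.67 nanoseconds corresponds to an object
# about one metre away, which shows why ToF timing electronics must
# resolve picoseconds to reach even millimetre accuracy.
print(tof_distance(6.671e-9))  # approximately 1.0 (metres)
```

The comment at the end also hints at why, as discussed below, time-of-flight sits at millimetre rather than micrometre accuracy: a 1 mm depth change shifts the round trip by only about 6.7 picoseconds.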
But there is still one remaining obstacle to the wider take-up of 3D imaging, and that’s a significant knowledge gap. Part of the issue is that 3D isn’t a single technology
Next there’s the complexity of the mechanical set-up. Laser profiling essentially builds up a picture of the object by capturing a series of slices as the object moves through the path of the light. This makes it a good option for scanning objects or materials as they move on a belt, but if they are stationary, the laser and detector have to move across them in perfect sync instead, resulting in a more complex arrangement. Structured light (aka fringe) presents the opposite issue, because it requires objects to be stationary. Stereo vision and time-of-flight can work either way, so they don’t present the same mechanical constraints.
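The slice-by-slice idea behind laser profiling can be sketched simply: each trigger of the camera yields one line of heights across the belt, and stacking successive lines as the object moves builds the full height map. A hedged illustration with made-up profile values standing in for real sensor output:

```python
# Sketch of laser-profiling scanning: each captured "slice" is one line of
# heights across the belt; stacking slices as the object moves past the
# laser line builds a complete 2D height map. Values are illustrative.

def stack_profiles(profiles):
    """Stack successive line profiles into a height map (one row per slice)."""
    height_map = []
    for profile in profiles:
        height_map.append(list(profile))  # copy so later edits can't alias
    return height_map

slices = [
    [0.0, 1.2, 1.3, 0.0],   # leading edge enters the laser line
    [0.0, 2.5, 2.6, 0.0],   # middle of the object
    [0.0, 1.1, 1.2, 0.0],   # trailing edge
]
scan = stack_profiles(slices)
print(len(scan), len(scan[0]))  # 3 4
```

This is also why the mechanics matter: the height map is only as regular as the motion, so belt speed (or the synchronised laser/detector traverse) has to match the camera’s trigger rate.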
Then there’s accuracy, which is not an issue when checking how many boxes are on a pallet but can be critical when checking for things such as fine surface defects in glass production, for example.
“The two most accurate are laser profiling and fringe projection, which both offer micrometre-scale accuracy,” says Anderson. “Time-of-flight and stereo vision are more like millimetre accuracy.”
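Part of the reason stereo vision lands at millimetre accuracy can be shown with the standard stereo depth relation, z = f·B/d, where f is focal length, B is the camera baseline and d is the pixel disparity between the two views: for a fixed disparity resolution, the depth error grows with the square of the distance. A sketch with hypothetical camera parameters:

```python
# Why stereo accuracy degrades with distance: depth is z = f*B/d, so a
# one-pixel disparity error translates to a depth error of roughly
# z^2 / (f*B). Focal length and baseline below are hypothetical.

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from the pixel disparity between the two stereo views."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px, baseline_m, z_m, disparity_step_px=1.0):
    """Approximate depth uncertainty for a one-step disparity error."""
    return (z_m ** 2) * disparity_step_px / (focal_px * baseline_m)

f, b = 1000.0, 0.1              # 1000 px focal length, 10 cm baseline
print(depth_error(f, b, 0.5))   # at 0.5 m: 0.0025 m, i.e. 2.5 mm per pixel
print(depth_error(f, b, 2.0))   # at 2.0 m: 0.04 m, error grows with z^2
```

This quadratic growth, together with the active technologies’ controlled illumination, is consistent with Anderson’s split between micrometre-scale and millimetre-scale techniques.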
“There are some really hard points [where there’s a stand-out leader to suit a particular application], but in some applications you can use different techniques at your convenience,” says Manuel. He agrees that laser profiling is the most popular choice in the moving belt scenario, for instance, but adds that it’s far less clear cut in applications such as bin picking.
Photonfocus is another company that offers a range of 3D technologies. “The accuracy of our 3D cameras for laser triangulation is very well-known in the market, but the special features of Photonfocus sensors are also very good for 3D applications like fringe projection or stereo vision,” Manuel says.
There’s no clear winner in terms of cost, according to Anderson, because that depends to a large extent on how much computing power you need: “The more accuracy you need, the more 3D data you need so the more computational power you need.”
HD camera targets tricky picking
RARUK Automation introduced its Pick-it 3D Robot Vision system last year and the company has now announced a top-of-the-range addition. The Pick-it M-HD high-definition 3D camera is designed to detect almost any small or medium-sized object, made from any material, with even higher accuracy.
Like all of RARUK’s Pick-it cameras, the new model uses structured light to generate its 3D images.
According to the company, Pick-it allows any camera-supported automation application to be built without expert help and without the need for complicated programming.
Instead, Pick-it guides the robot to see, pick and place products from bins or other locations to wherever they’re needed, such as an assembly line or conveyor belt.
According to the company, Pick-it allows any camera-supported automation application to be built without expert help and without the need for complicated programming
Simply show an example part to the plug-and-play camera, save this into the teach detection engine, tell Pick-it where to look with a click-and-drag tool, and Pick-it will guide the robot to the nearest pickable part. A typical detection cycle takes less than a second and Pick-it can find multiple parts in one cycle.
The system can also be connected to the internet for remote monitoring, extending Pick-it’s potential for ‘lights out’ operation and integration into a smart factory environment. And, since Pick-it can find parts in any location and layout, there is no need for a separate feeding line.
The combination of the Pick-it 2.0 3D picking software and the new M-HD camera is designed for tricky applications that involve picking small parts with a very high degree of accuracy. RARUK Automation says its systems are now 30 times more accurate.