WELDING WITH ROBOT VISION
By Heath Suraba
Lighting is vitally important and should not be left to chance. Band-pass filters are often used on the camera lens to admit only certain frequencies of light. This limits the possibility of unwanted influence from unexpected light sources such as skylights.
Have you ever considered how well our eyes adapt to a wide range of light? Vision systems can also adjust to changing light conditions, but not as dynamically as the human eye. They use two settings: electronic auto exposure and manual exposure. The auto exposure setting acts like our eyes by keeping the light level through the lens consistent.
The other exposure setting is mechanical (F-stop) and is usually located on the lens assembly. A good rule of thumb is to dial in the mechanical exposure (F-stop) so that the electronic auto exposure sits in the middle of its range. This lets the auto exposure act like a shock absorber on a car, giving maximum travel up and down as the light level rises and falls.
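The "middle of the range" rule can be sketched as a simple search. This is only an illustration; the function names, the 0-100 exposure scale, and the sample readings are all invented, not taken from any camera API:

```python
# Hypothetical sketch: choose the F-stop that leaves the electronic auto
# exposure nearest the center of its range, so it has equal "shock absorber"
# travel in both directions as ambient light rises or falls.
def headroom(auto_exposure, auto_min=0, auto_max=100):
    """Distance of an auto-exposure reading from the center of its range."""
    center = (auto_min + auto_max) / 2
    return abs(auto_exposure - center)

def best_f_stop(readings):
    """readings: dict mapping F-stop -> auto-exposure value observed there."""
    return min(readings, key=lambda f: headroom(readings[f]))

# Invented example readings: at f/5.6 the auto exposure settles near
# mid-range (52 on a 0-100 scale), so it is the best choice here.
readings = {2.8: 15, 4.0: 35, 5.6: 52, 8.0: 78}
print(best_f_stop(readings))  # -> 5.6
```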
Say a crane passes overhead and blocks the ceiling lights, causing your vision image to go dark. Not good, right? The auto exposure would kick in and automatically brighten the image. Peripheral lights are often installed on the robot arm alongside the camera and controlled digitally through the robot to ensure ample light is always available when needed.
Remember that with vision systems, it is all about creating consistent lighting.
CALCULATED ADAPTIVE FILL
It is a great thing to be able to track and adapt to a weld joint that is moving using vision. But did you know it is also possible to track the weld joint and change weld procedures on the fly based on the size of the weld gap? We call this Calculated Adaptive Fill.
By counting the pixels spanning the weld gap width and knowing the length of each pixel, it is easy to know how big the weld gap is with great accuracy. The measurement is done very much the same way as using a pair of handheld calipers. With vision, the measurement is made electronically. Knowing the gap size allows you to calculate the appropriate change in welding procedures such as wire feed speed, travel speed, and weave amplitude just to name a few. These procedure changes produce consistent results even when the gap is varying and the position of that gap is moving.
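The caliper-style measurement described above can be sketched in a few lines. The pixel calibration value and the procedure numbers below are invented purely for illustration; real values come from camera calibration and qualified weld procedures:

```python
# Hypothetical sketch of measuring a weld gap from a camera image.
# The scale comes from calibrating the camera; this value is invented.
MM_PER_PIXEL = 0.05  # e.g. 0.05 mm per pixel after calibration

def gap_width_mm(pixel_count):
    """Convert the pixel span across the gap into millimetres."""
    return pixel_count * MM_PER_PIXEL

def adapted_procedure(gap_mm):
    """Scale example parameters with gap size (all numbers illustrative):
    wider gaps get more wire, slower travel, and a wider weave."""
    return {
        "wire_feed_ipm": 250 + 50 * gap_mm,            # more filler metal
        "travel_speed_ipm": max(10, 20 - 4 * gap_mm),  # slow down as gap grows
        "weave_amplitude_mm": 1.0 + gap_mm,            # weave wide enough to bridge
    }

print(gap_width_mm(40))        # 40 pixels -> 2.0 mm
print(adapted_procedure(2.0))
```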
Because the vision software is fully integrated into the robot, these values can be ramped up and down as the gap varies, which is very cool to see. For example, across 18 in of weld, the gap at the beginning might be tight, open up to 3.0 mm halfway through, then close down to 0.5 mm at the end.
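The ramping idea can be illustrated with simple linear interpolation between gap measurements taken at a few points along the weld. The sample positions, the assumed 0.5 mm starting gap, and the wire-feed formula are all hypothetical:

```python
# Hypothetical sketch: ramp a weld parameter along an 18 in weld based on
# gaps measured at a few sample points (position in inches, gap in mm).
# The starting gap of 0.5 mm is an assumption for the "tight" end.
SAMPLES = [(0.0, 0.5), (9.0, 3.0), (18.0, 0.5)]

def gap_at(position, samples=SAMPLES):
    """Linearly interpolate the gap between the nearest measured points."""
    for (x0, g0), (x1, g1) in zip(samples, samples[1:]):
        if x0 <= position <= x1:
            t = (position - x0) / (x1 - x0)
            return g0 + t * (g1 - g0)
    raise ValueError("position outside the measured weld length")

def wire_feed_ipm(position):
    """Ramp wire feed speed with gap size (base and slope are invented)."""
    return 250 + 50 * gap_at(position)

print(gap_at(4.5))        # -> 1.75 mm, halfway between 0.5 and 3.0
print(wire_feed_ipm(9.0)) # -> 400.0 at the widest point
```

In practice the robot controller updates these values continuously as the torch travels, rather than at a handful of sample points.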
Vision can see all of this variation and fill the gap as-is for that particular part. By taking multiple pictures along the weld joint in known problem areas, the weld can be "adaptively" filled without issue, as long as your robot and welding power source are capable of Calculated Adaptive Fill. It is also recommended that every production facility planning to use vision develop at least one "Vision Champion" per shift.
Welding with robot vision is easy to use, especially when vision is fully integrated into the robot controller. Vision depends on lighting consistency, and you can set yourself up for success by taking time up front to make your process as robust as possible, such as painting your fixtures black to create contrast behind your weld assemblies.
While vision will take some time to learn, its long-lasting benefits are well worth the effort.
About the Author: Heath Suraba is an automation application technologist in the Automation Division of The Lincoln Electric Company, 22801 Saint Clair Avenue, Cleveland, OH 44117-1199, 216-481-8100, Fax: 216-486-1751, email@example.com, www.lincolnelectric.com.