Tool Alignment Machine Vision - Some Progress



  • I've put three new repositories on GitHub:

    First, every "install OpenCV" tutorial I can find is several pages long with twenty or thirty steps.

    Second, they all "virtualize" the install to make it possible to have multiple versions and configurations of OpenCV. This complicates things at runtime, and we don't need it. We want a single install that "just works".

    It is NOT intended as a general interface; it is for 'lightweight' use in scripts, with the strong intent to abstract away as much complexity as possible. More info in the readme in the repository.

    This enables TAMV to run on a D3+Pi, or on any Pi that can reach a D2 or D3 printer via the network.

    This script is intended to allow Duet V2 and V3 tool-changing printers to fully automatically align any number of tools that have a recognizable circular nozzle. It will ultimately generate the correct G10 commands and write them to the printer in a file that can be included in config.g via M98.



  • As of this moment (19 Mar 2020), the TAMV.py script does:

    • Prompts you for a name/address and connects to D2 or D3 printers.
    • Works with a USB camera (no PiCam; the Logitech C270 is HIGHLY recommended).
    • Prompts you to mount the first tool and jog it into view of the camera.
    • Is completely automated from there:
      • Finds the nozzle (so far, very reliably, under a variety of lighting conditions).
      • Figures out the XY orientation of the carriage vs. the camera.
      • Figures out movement directions.
      • Centers the nozzle and obtains the printer coordinates of that centering.
      • Repeats this for every tool.
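    The orientation and centering steps above boil down to a small coordinate transform: jog the carriage a known amount in X, watch which way the detected nozzle moves in the camera frame, and from then on rotate pixel offsets into machine X/Y. A minimal sketch of that idea, with hypothetical helper names (this is not TAMV's actual code, and mm_per_pixel would have to be calibrated separately):

    ```python
    import math

    def carriage_to_camera_angle(uv_before, uv_after):
        """Angle (radians) of the camera-frame displacement produced by a
        pure +X carriage jog: move X, watch where the nozzle center goes."""
        du = uv_after[0] - uv_before[0]
        dv = uv_after[1] - uv_before[1]
        return math.atan2(dv, du)

    def pixel_offset_to_machine(du, dv, angle, mm_per_pixel):
        """Rotate a pixel-space offset back into machine X/Y millimeters."""
        dx = (du * math.cos(-angle) - dv * math.sin(-angle)) * mm_per_pixel
        dy = (du * math.sin(-angle) + dv * math.cos(-angle)) * mm_per_pixel
        return dx, dy
    ```

    Centering is then a loop: detect the nozzle, convert its pixel offset from image center into machine millimeters, jog by that amount, and repeat until the offset is below some threshold.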

    I went ahead and posted it because there is quite a bit above that people could start testing.

    • At this moment it does NOT (yet):
      • Remove existing G10 offsets from the running printer.
      • Keep and calculate tool-to-tool coordinates for later math
      • Do that math!!
      • Produce G10 commands from that math
    • All of that will come in the near future.


  • For those who know from other threads that I struggled for a long time with OpenCV's machine vision circle recognizer... and couldn't get it tuned reliably enough to work for even a few hours... you may wonder what changed.

    I completely quit looking for circles. That is, I took the OpenCV "HoughCircles" function entirely out of the code.

    Now it looks for "blobs": closed hulls. Once it has found all of those in an image, it filters for "circular" blobs, meaning blobs with the mathematical properties of a circle: lots and lots of sides (not a triangle, a hexagon, a square/rectangle/parallelogram, etc.), a convex hull (no inward curves or spikes), and high "inertia" (the matrix-math term for being evenly round instead of a long, skinny ellipse), and so forth. About six filters in all.
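    The filter set described above maps closely onto OpenCV's built-in SimpleBlobDetector, which filters candidate blobs by area, circularity, convexity, inertia, and color. A minimal sketch assuming that detector; the threshold values below are illustrative guesses, not TAMV's tuned parameters:

    ```python
    import cv2

    # Parameter block for OpenCV's blob detector. Thresholds here are
    # illustrative only, not TAMV's tuned values.
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True          # ignore specks and huge regions
    params.minArea = 150
    params.maxArea = 15000
    params.filterByCircularity = True   # "lots of sides": 4*pi*area/perimeter^2
    params.minCircularity = 0.8
    params.filterByConvexity = True     # convex hull: no inward curves or spikes
    params.minConvexity = 0.9
    params.filterByInertia = True       # evenly round, not a long skinny ellipse
    params.minInertiaRatio = 0.5

    detector = cv2.SimpleBlobDetector_create(params)

    def find_circular_blobs(gray_image):
        """Return (x, y, diameter) for each blob that passes all the filters."""
        keypoints = detector.detect(gray_image)
        return [(kp.pt[0], kp.pt[1], kp.size) for kp in keypoints]
    ```

    A single pass over the image yields every surviving candidate and its center, which is exactly the "too few / too many circles" count the script can report back to the user.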

    Turns out this is MUCH faster, and I mean MUCH MUCH faster, than the Hough algorithm. It also "fails faster" if there is nothing suitable in the image, and produces far fewer false positives.

    That is the breakthrough. Blobs.

    We will see how well this really works, as other people begin to test it.



  • Example console output from a run (against just one tool):

    pi@duet3:~/TAMV $ ./TAMV.py 
    Please standby, loading libraries; some of them are very large.
    Please standby, attempting to connect to printer...
    Connected to a Duet V3 printer at http://127.0.0.1
    #################################################################################
    # 1) Using Duet Web, mount tool zero                                            #
    # 2) Using Duet Web, jog that tool until it appears in the camera view window.  #
    # 3) Using Duet Web, very roughly center the tool in the window.                #
    # 4) Click back in this script window, and press Ctrl+C                         #
    #################################################################################
    
    ^C
    Please standby, initializing machine vision...
    Initiating a small X move to calibrate camera to carriage rotation.
    Camera to carriage movement axis incompatible... will rotate image and calibrate again.
    Initiating a small X move to calibrate camera to carriage rotation.
    Found X movement via rotation, will now calibrate camera to carriage direction.
    Found Center of Image at printer coordinates  {'X': 313.875, 'Y': 361.54, 'Z': 6, 'U': 0}
    {'X': 313.875, 'Y': 361.54, 'Z': 6, 'U': 0}
    

    Typical view through the monitoring window on the graphics console:

    (screenshot attachment)



  • And, a couple of important tips:

    1) You may have to clean the nozzles. I spent several hours thinking I had broken the code before I noticed a blob of dark material on a nozzle...

    This did result in an enhancement: it will now tell you, onscreen, if it is finding zero circles, or too many.

    2) Run with the blue sock on. It covers a bolt that will get recognized if uncovered, and completely mess this up.



  • Several commits to GitHub in the last 24 hours, many of them tuning the recognizer.

    In addition, most of the "not yet" list is now implemented (20 Mar 2020). The last four items below are all a "yes" as of this moment.

    • Prompts you for a name/address and connects to D2 or D3 printers.
    • Works with a USB camera (no PiCam; the Logitech C270 is HIGHLY recommended).
    • Prompts you to mount the first tool and jog it into view of the camera.
    • Is completely automated from there:
      • Finds the nozzle (so far, very reliably, under a variety of lighting conditions).
      • Figures out the XY orientation of the carriage vs. the camera.
      • Figures out movement directions.
      • Centers the nozzle and obtains the printer coordinates of that centering.
      • Repeats this for every tool.
      • Removes existing G10 offsets from the running printer.
      • Keeps tool-to-tool coordinates for later math.
      • Does that math!!
      • Produces G10 commands from that math.
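    The tool-to-tool math is essentially subtraction against tool 0's centered coordinates: if every nozzle is driven to the same spot under the camera, the difference in machine coordinates is the tool offset. A minimal sketch of that idea (hypothetical function, and the sign convention is an assumption, not TAMV's actual output code):

    ```python
    def g10_offsets(centers):
        """centers: {tool_number: (x, y)} machine coordinates at which each
        nozzle was centered under the camera. Tool 0 is the reference; any
        difference for tool N becomes its G10 X/Y offset."""
        x0, y0 = centers[0]
        lines = []
        for tool in sorted(centers):
            x, y = centers[tool]
            # Correction is reference minus measured; the sign convention
            # here is a plausible assumption, not verified against RRF.
            lines.append("G10 P{0} X{1:.3f} Y{2:.3f}".format(tool, x0 - x, y0 - y))
        return lines
    ```

    The resulting lines are what would be written to a file for inclusion from config.g via M98.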


  • @Danal

    This is awesome work! Quick question, though: with your machine vision script, have you thought of using reference images? Basically, have an image of a nozzle that roughly matches what the camera should see, and then compare the two images. I'm not sure if that would help with locating the center of the nozzle per se, but it could let you know if the nozzle is too dirty for the script to work correctly, since you mentioned that a blob caused issues with shape recognition. Maybe also keep a few reference images of dirty/burnt nozzles.

    Only mentioning the reference images because that is essentially how you "train" FANUC and other industrial robot machine-vision pick-and-place systems, at least the two I have seen. A reference image is taken, some relevant geometry is highlighted as "important", and that is what the algorithm looks for.



  • @Red-Sand-Robot said in Tool Alignment Machine Vision - Some Progress:

    @Danal

    Only mentioning the reference images because that is essentially how you "train" FANUC and other industrial robot machine-vision pick-and-place systems, at least the two I have seen. A reference image is taken, some relevant geometry is highlighted as "important", and that is what the algorithm looks for.

    You are correct that some machine vision requires training. This approach does not, and therefore cannot benefit from it.

    What it does do is tell the human if it sees too many circles (and where they are), or too few.

    Fixing "too many" is a matter of getting the extra circles out of view. Example: I have a nozzle where the insulation on the heater wire happened to face downward rather nicely, forming a circle. Pinch it, put tape over it, or similar.

    Too few circles? Clean the nozzle.

    Maybe (and really, only maybe) change the lighting. I have gotten the recognizer to the point that it works for me in my room at night, lit with ceiling cans, and during the day, lit by a wall full of sunny windows. I can even wrap my hands around the camera while it is recognizing, which dramatically changes the lighting, and it deals with that properly. One alpha tester has his printer in a totally dark room; the nozzle is illuminated only by the green LED of a caseless C270 camera, off to one side. It still works.

    Anyway, thanks for the idea; however, it is N/A to this particular setup.



  • Quite a few commits in the last few days. Alpha testers are reporting success.

