Where is the difference - 10 times X1 vs 1 times X10
-
The slicer is about generating paths. In my opinion the machine should deal with as much machine-specific stuff as possible. I would say the gold standard is being able to take a build file and run it on any machine. That of course means slicing and path generation on the machine, which would allow much better control over segmentation.
This is a good use case for a Python parsing script running on the single board computer, as there are also issues with making the vectors/segmentation too large. This is a machine-level problem that needs to be dealt with as close to the coal face as possible, as that is where the information on instant speed, junction deviation, acceleration or integrals/derivatives thereof resides.
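To make that concrete, here is a minimal sketch of the kind of pre-processing such a script could do, collapsing runs of very short, nearly collinear XY moves. The thresholds and the merge_short_segments() helper are purely illustrative (not any existing tool), and it ignores extrusion and feed rate for brevity:

```python
import math

MIN_SEG_LEN = 0.5    # mm: merge segments shorter than this (illustrative value)
MAX_TURN_DEG = 2.0   # degrees: only merge when the direction barely changes

def merge_short_segments(lines):
    """Collapse runs of tiny, nearly collinear G1 XY moves into single moves.

    Assumes absolute XY coordinates and drops E/F words for brevity; a real
    pre-processor would also have to redistribute extrusion and feed rate.
    """
    out, pending, last = [], None, None
    for line in lines:
        if line.startswith("G1 ") and " X" in line and " Y" in line:
            x = float(line.split("X")[1].split()[0])
            y = float(line.split("Y")[1].split()[0])
            if last is not None and pending is not None:
                seg_len = math.hypot(x - pending[0], y - pending[1])
                a_prev = math.atan2(pending[1] - last[1], pending[0] - last[0])
                a_new = math.atan2(y - pending[1], x - pending[0])
                turn = abs(math.degrees(a_new - a_prev)) % 360
                turn = min(turn, 360 - turn)
                if seg_len < MIN_SEG_LEN and turn < MAX_TURN_DEG:
                    pending = (x, y)   # extend the pending move instead of emitting it
                    continue
            if pending is not None:
                out.append(f"G1 X{pending[0]:.3f} Y{pending[1]:.3f}")
                last = pending
            pending = (x, y)
        else:
            if pending is not None:
                out.append(f"G1 X{pending[0]:.3f} Y{pending[1]:.3f}")
                last, pending = pending, None
            out.append(line)
    if pending is not None:
        out.append(f"G1 X{pending[0]:.3f} Y{pending[1]:.3f}")
    return out

# e.g. cleaned = merge_short_segments(open("print.gcode").read().splitlines())
```

A real tool would of course need to carry the E and F values through the merge, but the basic idea of joining sub-threshold segments is no more than this.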
@deckingman many perfectly straight vectors are unlikely, but I've seen plenty of instances where contours around a curved feature have been broken down into hundreds, if not thousands, of submicron-length vectors with only a tiny angle between them.
-
@DocTrucker said in Where is the difference - 10 times X1 vs 1 times X10:
@deckingman many perfectly straight vectors are unlikely, but I've seen plenty of instances where contours around a curved feature have been broken down into hundreds, if not thousands, of submicron-length vectors with only a tiny angle between them.
Most slicers are aware of the limitations of GCode throughput, which is of course worse on older electronics and much worse when printing over USB on older electronics; so they have a minimum length of output segment. If you produce a curved object (e.g. cylinder) with very tiny segments in the STL file, the slicer will attempt to combine segments until the minimum segment length is reached.
If users hit the GCode throughput limit on real prints, then I'm willing to look at improving it.
-
@dc42 that's logical; you need not fix a 'problem' until it presents itself in a real-world example. I had plenty of cases presented to me that were obscure and twisted scenarios to demonstrate 'a serious problem' which were nothing of the sort. They were more a weakness under certain circumstances that operators needed to be aware of, until the point where all the bigger issues had been resolved and the less frequent issues could be tackled.
I do think this sort of problem is best dealt with on the computer - rather than the controller - as there is no need for this to be done in real time.
-
I wonder how klipper would handle this? In theory it should be able to use a much longer gcode queue. Similarly I suppose the SBC version of RRF could in theory pre-process the gcode inside of dsf to merge the line segments. Whether it is worth it or not is of course another question.
On a related note, is there a description anywhere of what processing of gcode is performed by dsf (if any)?
-
On Duet 3 we have enough RAM to use a longer queue too. However, we limit the number of moves in the queue to 2 seconds of moves + 1 move, to prevent pauses being delayed too much in the event that we can't schedule a pause between moves already in the queue.
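As a rough back-of-the-envelope model of that cap (the constant and function here are mine, not the firmware's actual code):

```python
MAX_QUEUE_TIME = 2.0  # seconds of queued movement, per the rule described above

def can_queue_another_move(queued_move_durations):
    """Toy model of the '2 seconds of moves + 1 move' limit.

    A new move is admitted while the moves already queued total no more
    than 2 s; the admitted move may then push the total past 2 s, which
    is the '+ 1 move' allowance.
    """
    return sum(queued_move_durations) <= MAX_QUEUE_TIME

# With 10 ms segments this allows roughly 201 queued moves, while with
# 0.5 s moves it allows only 5 - so a requested pause can always be
# scheduled within about two seconds, regardless of segment length.
```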
-
@dc42 said in Where is the difference - 10 times X1 vs 1 times X10:
If users hit the GCode throughput limit on real prints, then I'm willing to look at improving it.
I tested some rather "bad" g-code generated by S3D from a "too precise" STL that would kill an OctoPrint+Marlin combo (stutters, blobs, crazy bad print quality), and sometimes even Marlin printing from SD card without OctoPrint; the Duet ate it without a problem and printed it perfectly (Duet2Eth, 3.01RC1), so I don't think RRF is anywhere close to the problem here
-
@gloomyandy Tiertime, for example, does all the calculation directly in the slicer and sends "precompiled" code to the firmware, so the firmware only executes the stepping. The files are huge (a file looks like the data klipper sends to the stepping boards) but the approach has its benefits. The major problem is that their slicer is a POS and that the format is closed, but the system does work rather well.
-
@arhi said in Where is the difference - 10 times X1 vs 1 times X10:
@dc42 said in Where is the difference - 10 times X1 vs 1 times X10:
If users hit the GCode throughput limit on real prints, then I'm willing to look at improving it.
I tested some rather "bad" g-code generated by S3D from a "too precise" STL that would kill an OctoPrint+Marlin combo (stutters, blobs, crazy bad print quality), and sometimes even Marlin printing from SD card without OctoPrint; the Duet ate it without a problem and printed it perfectly (Duet2Eth, 3.01RC1), so I don't think RRF is anywhere close to the problem here
FWIW, Smoothieware also had a problem with short segments generated by S3D a few years ago. Duet/RRF ran OK on the same GCode files. For a long time the Smoothieware devs blamed S3D, which was reasonable except that it didn't help users. Eventually they put some sort of fix or workaround in Smoothieware.
-
@arhi said in Where is the difference - 10 times X1 vs 1 times X10:
@gloomyandy Tiertime, for example, does all the calculation directly in the slicer and sends "precompiled" code to the firmware, so the firmware only executes the stepping. The files are huge (a file looks like the data klipper sends to the stepping boards) but the approach has its benefits. The major problem is that their slicer is a POS and that the format is closed, but the system does work rather well.
Some 3DSystems printers do (or, at least, did) that, too. Leveraged the host CPU for the complex stuff, sent individual step commands in what was basically a big spreadsheet to a dumb microcontroller on board. Then the board only has to run the 'spreadsheet', and monitor temperatures and any other inputs.
Ian
-
@dc42 yes, S3D 3.0 and earlier were very bad ... and Smoothieware didn't know how to handle those ... now both do better: S3D from 3.1 does not generate code as bad as 3.0 and before, and Smoothieware fixed the issue they had so they can parse way more codes/sec than before. IIRC that's also when that "do not calculate junction if the angle is less than..." rule came to be
-
I did the same with the control system on the MCP/MTT/Renishaw machine. The computer read the whole build file and parsed it into exposure points a controlled distance apart. These were then sent to the optics system as single-slice files. Yeah, some were vastly larger than the source data, but it also cleaned up small vector issues which the real-time controllers really struggled with.
But this did make all sorts of things very easy, such as part suppression, moving and offsetting parts or slice data, changing processing parameters, and reloading build files mid-print for more serious changes.
-
@droftarts said in Where is the difference - 10 times X1 vs 1 times X10:
Some 3DSystems printers do (or, at least, did) that, too. Leveraged the host CPU for the complex stuff, sent individual step commands in what was basically a big spreadsheet to a dumb microcontroller on board. Then the board only has to run the 'spreadsheet', and monitor temperatures and any other inputs.
that's the dudes that purchased Bits from Bytes? The first 32-bit (PIC32MX-based) electronics in the reprap/repstrap world
When I first came in contact with TT machines (UP Plus 2) I did some research, as in some things they were ages ahead of the reprap community, and what I found is that they used the same approach on these small "home" printers as professional 500k+ machines are using. Most of those huge machines do exactly that - just execute the "spreadsheet", as you call it - and everything is done in the "slicer". On the other hand, the UP Plus 2 uses an NXP 32-bit ARM to execute that spreadsheet, while most home 3D printers try to do everything from parsing to planning to executing on an 8-bit ATmega
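For anyone wondering what such a "spreadsheet" might actually contain, here is a hedged sketch; the row layout and names are invented for illustration, not Tiertime's or 3DSystems' real format. The host bakes all acceleration and junction handling into per-row delays, and the controller just replays them:

```python
import time

# Hypothetical precompiled plan: one row per step event.
# (delay_us, step_mask, dir_mask) - which motors to step, in which direction,
# and how long to wait before the next row. All planning is already done.
PLAN = [
    (500, 0b0011, 0b0001),  # step X and Y; X positive, Y negative
    (480, 0b0011, 0b0001),  # slightly shorter delay = we are accelerating
    (460, 0b0001, 0b0001),  # step X only
    # ... a real build file would contain millions of rows
]

def execute_plan(plan, pulse_pins):
    """Replay a precompiled step table.

    pulse_pins(step_mask, dir_mask) stands in for whatever toggles the
    board's step/dir outputs; on a microcontroller this loop would live
    in a timer interrupt rather than using time.sleep().
    """
    for delay_us, step_mask, dir_mask in plan:
        pulse_pins(step_mask, dir_mask)
        time.sleep(delay_us / 1_000_000)

# Dry run with a no-op pin driver:
# execute_plan(PLAN, lambda step_mask, dir_mask: None)
```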
-
@arhi It makes sense when you've got a dedicated PC connected to the printer (like in the CNC world), but less sense when you've got a general purpose PC doing it. Because if you're doing all the computation up front and creating a huge file, you've either got to stream the data to the microcontroller (so the PC shouldn't really be used for other things to avoid hiccups, which ties up a potentially expensive PC), or your microcontroller gets more complicated as you have to add things like storage and ways of listing what's on the storage. Or you get a second PC to handle the data streaming. Or you get a smarter microcontroller. All options have their merits and deficiencies!
Ian
-
@droftarts haven't we essentially got what we need already with the Duet 3 and single board computer? The Raspberry Pi is likely more powerful than the Windows XP 32-bit system I was using.
This would be easy enough to write in a way that the cleaned gcode could be recompiled back into a complete gcode file, modifications and all. This would mean the software could separately pre-parse data offline, making it useful to all generations of Duet and to Duet 3 without a single board computer. ...also appeasing those who get twitchy at not seeing all the gcode prior to a run.
-
@droftarts well, on those old machines without a dedicated computer you were dead in the water, but today a 128GB SD card can be run from an 8-bit MCU .. now, I have no clue what power is required to step through the "spreadsheet", but if an 8-bit MCU at 16MHz can parse the g-code, calculate the plan, and then step through it, I'm kinda sure it can step through a precompiled plan
What I really didn't like about UP was that all the printer calibration (size, skew, bed mesh...) is in the slicer, so your code is not universal. You have to slice for each specific machine. For example, I have both an UP Plus 2 and an UP Mini with rather similar beds; if I put the same size nozzle in, the g-code is generally identical when they are running Smoothieware (as they are now, but the Mini will be going to Duet these days), but if they are running the original firmware I have to slice specifically for each printer. It has its benefits too: there's no fiddling with firmware configuration, everything is point and click (not sure if I could adjust some of the calibration things TT allows with the Duet) .. anyhow we went faaaaaaaaaar away from the original post ... the big takeaway from this thread is the length of the queue
-
@arhi said in Where is the difference - 10 times X1 vs 1 times X10:
that's the dudes that purchased Bits from Bytes?
Yes. Bits from Bytes were based in Clevedon, not far from Bristol (where I am), and grew out of the very early RepRap community at Bath University (Dr Adrian Bowyer et al). I know a couple of people who worked there, including after 3DSystems took them over and effectively mothballed production, keeping it for R&D (though dictated by the US head office) and for supporting existing machines in Europe.
Ian
-
@DocTrucker said in Where is the difference - 10 times X1 vs 1 times X10:
@droftarts haven't we essentially got what we need already with the Duet 3 and single board computer? The Raspberry Pi is likely more powerful than the Windows XP 32-bit system I was using.
Only if you dedicate the Pi exclusively to running the print and don't try to do anything else that is CPU-intensive on it. Raspbian is not a real-time operating system. RRF/DSF run the planning on the Duet (which does run a real-time operating system) so that you can do other things on the Pi at the same time, e.g. camera, complex web interface (GCode visualisation coming soon), and potentially slicing. That's also why we use a dedicated SPI interface instead of USB.
-
@DocTrucker that is what klipper is doing: precompiling the g-code in real time and pushing the stepper instructions to the stepper boards... so all planning is done on the host (which can be an RPi, but also a 128-core desktop PC with a terabyte of RAM)
-
@arhi said in Where is the difference - 10 times X1 vs 1 times X10:
@DocTrucker that is what klipper is doing: precompiling the g-code in real time and pushing the stepper instructions to the stepper boards... so all planning is done on the host (which can be an RPi, but also a 128-core desktop PC with a terabyte of RAM)
Running a real-time task such as planning on a system running a non-realtime OS and sending near real-time commands over a shared bus is IMO a dubious thing to do. But of course you can get away with it if you don't have much else competing for CPU and bus time.
-
@droftarts that was a sad day .. I used the RapMan 3.0, and making the RapMan reliably print HDPE and PP got me into the reprap core team 10 years ago .. those were the times .. hand-made hotends, the revolutionary Wade's extruder... ah.. memories .. nope's idea for the heated bed ... there was a very strong community around BfB; Prusa's first printer was a RapMan, and Erik who made Ultimaker, his first printer was a RapMan too .. Kai Parthy, the guy who invented wood-filled filament and all those LAY* filaments, was also there ... memories ...