    Maximum G-Code file size

    • Markdnd

      Is there a maximum allowable size for G-Code files in DWC?

      I have a 66 MB file which causes DWC to repeatedly restart when it tries to display the file list.
      The list displays briefly with “scanning file” next to the big one, then DWC disconnects.

      (No, I don’t know why the file is that big either. It has a lot of smooth curves, so probably non-optimal design on my part.)

      • deckingman

        I don't know what the maximum size is, but I have a few that are over 100 MB, one of which is 140 MB, and I have no problems with DWC acting strangely with them. Do you maybe have some odd character in the file name? You haven't accidentally uploaded an STL instead of a G-code file, have you (it's been done before)?

        Ian
        https://somei3deas.wordpress.com/
        https://www.youtube.com/@deckingman

        • dc42 (administrator)

          This issue is caused by the cluster size on the SD card being small and the SD card being slow and/or fragmented. Solutions:

          1. Try firmware 1.21RC3, which contains additional code to avoid this
          2. Use a faster SD card
          3. Reformat the SD card using the largest cluster size available (64 kB); the sketch after this list puts rough numbers on why this helps
          4. If the SD card is fragmented, defragment it
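
          To put rough numbers on the cluster-size effect, here is a back-of-envelope sketch (it assumes a FAT filesystem where reaching a point late in a file means walking one cluster-chain entry per cluster; the figures are not from this post):

              # Rough count of cluster-chain entries that must be walked to reach
              # the end of a file on a FAT-formatted SD card (one entry per cluster).
              def chain_entries(file_size_bytes, cluster_size_bytes):
                  return -(-file_size_bytes // cluster_size_bytes)  # ceiling division

              file_size = 66 * 1024 * 1024      # the 66 MB file from the first post
              for cluster_kb in (4, 64):
                  n = chain_entries(file_size, cluster_kb * 1024)
                  print(f"{cluster_kb:>2} kB clusters -> {n} entries")
              # prints 16896 entries for 4 kB clusters, 1056 entries for 64 kB clusters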

          Duet WiFi hardware designer and firmware engineer
          Please do not ask me for Duet support via PM or email, use the forum
          http://www.escher3d.com, https://miscsolutions.wordpress.com

          • Markdnd

            Thanks - I’ll take a look at the SD card.

            Is it worth waiting for the stable release, or is 1.21RC3 pretty much free of print-killing bugs?

            • dc42 (administrator)

              1.21RC3 works reliably for me, except for a bug that sometimes causes DWC to disconnect after I upload a file. I can reconnect immediately. Read the release notes about changes you may need to make to your homing files.

              Duet WiFi hardware designer and firmware engineer
              Please do not ask me for Duet support via PM or email, use the forum
              http://www.escher3d.com, https://miscsolutions.wordpress.com

              • Markdnd

                I have a delta configuration so hopefully no changes are needed.

                I checked my SD card and, as you suspected, the cluster size was 4 kB.

                Since this is the card that was supplied with the Duet and hadn't been reformatted, I thought I'd better let you know in case any stock needs reformatting.

                As for fragmentation, is this even an issue any more?

                Back in the stone age, when I was repairing computers made out of rocks and sharpened sticks, shifting from one track to another involved physically moving the read head and took forever (relatively speaking). Fragmentation that split sequential blocks throughout multiple tracks would cause significant performance issues.

                SD cards, on the other hand, have no moving parts and, generally speaking, retrieving one block takes pretty much the same amount of time as any other, regardless of the logical track it might be located on. Caching and fault recovery by substitution of redundant tracks make the actual location of the block even less relevant (or even predictable).

                This means that, in theory, defragmentation is unnecessary and may actually reduce the life of the card by performing additional write operations (as you know, they can only handle a limited number of writes).

                Caching, of course, might struggle a little though I suspect most algorithms are intelligent enough to predict the next block needed.

                Small block sizes, on the other hand, would increase communications overhead significantly - many flash cards and SSDs are specifically tuned to perform far better with large sequential reads than with individual 4 kB reads.

                I can see that having the potential to impact performance and cause timeouts. Anyone who's ever tried to back up large numbers of files to a memory stick will have seen how massive files transfer in a matter of seconds, whereas folders containing a large number of small files can take minutes or even hours. Increasing the block size will help here.
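
                A quick way to feel that overhead (an illustrative sketch; the file name below is hypothetical, and a desktop OS with its caches will show a far smaller gap than a microcontroller reading the card directly):

                    import time

                    def timed_read(path, chunk_size):
                        """Read the whole file in fixed-size chunks and return the elapsed time."""
                        start = time.perf_counter()
                        with open(path, "rb", buffering=0) as f:   # unbuffered, so chunk size matters
                            while f.read(chunk_size):
                                pass
                        return time.perf_counter() - start

                    path = "large_print.gcode"                     # hypothetical large file
                    for kb in (4, 64):
                        print(f"{kb:>2} kB chunks: {timed_read(path, kb * 1024):.2f} s")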

                It should be noted that large block sizes will reduce the maximum number of small files that can be stored, though I doubt most of us will actually run into that problem even with a 4 GB card.

                • dc42 (administrator)

                  Fragmentation is less of an issue with SD cards, but it still causes the cluster link table to be spread over a larger number of sectors, which increases the time taken to seek to near the end of the file and read information from there.
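
                  A toy model of that effect (the assumptions are mine, not figures from this post: FAT32 with 4-byte entries and 512-byte sectors, so 128 chain entries per FAT sector; a fragmented file occupies scattered cluster numbers, so its chain entries land in many more FAT sectors):

                      import random

                      ENTRIES_PER_FAT_SECTOR = 512 // 4   # FAT32: 4-byte entries, 512-byte sectors

                      def fat_sectors_touched(cluster_numbers):
                          """Distinct FAT sectors read while walking the file's cluster chain."""
                          return len({c // ENTRIES_PER_FAT_SECTOR for c in cluster_numbers})

                      n_clusters = 16896                                  # 66 MB file at 4 kB clusters
                      contiguous = range(2, 2 + n_clusters)               # allocated back to back
                      fragmented = random.sample(range(2, 2_000_000), n_clusters)  # scattered over the card

                      print("contiguous:", fat_sectors_touched(contiguous), "FAT sectors")
                      print("fragmented:", fat_sectors_touched(fragmented), "FAT sectors")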

                  Duet WiFi hardware designer and firmware engineer
                  Please do not ask me for Duet support via PM or email, use the forum
                  http://www.escher3d.com, https://miscsolutions.wordpress.com

                  • Markdnd

                    That’s where the cluster size makes a big difference, though presumably you don’t have enough spare RAM to hold even one 64 kB cluster, let alone the 2 or 3 you’d need to cache the cluster table for an 8 GB drive.
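
                    For scale, a back-of-envelope sketch (assuming FAT32 with 4-byte cluster entries; not a figure from the thread):

                        # Size of the full cluster table (FAT) for an 8 GB card at 64 kB clusters.
                        card_bytes = 8 * 1024**3
                        cluster_bytes = 64 * 1024
                        fat_bytes = (card_bytes // cluster_bytes) * 4   # 4 bytes per FAT32 entry
                        print(fat_bytes // 1024, "kB")                  # 512 kB - well beyond the board's spare RAM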

                    Working with tablets and desktops, I’ve been spoiled and have almost forgotten the joys of trying to cram an operating system and application into what is basically a glorified oven timer.
