• 3 Posts
  • 122 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • > No they are not “a tool like any other”. I do not understand how you could see going from drawing on a piece of paper to drawing much the same way on a screen as equivalent to an autocomplete function operated by typing words into one or two prompt boxes and adjusting a bunch of knobs.

    I don’t do this personally, but I know of wildlife photographers who use AI to help visualize the type of photo they’re trying to take (effectively using it for planning) and then go out and try to capture that photo. It’s very much a tool in that case.


  • Unfortunately, proprietary professional software suites are still usually better than their FOSS counterparts: for instance, Altium Designer vs. KiCAD for ECAD, and SolidWorks vs. FreeCAD for MCAD. That’s not to say the open source tools are bad. I use them myself all the time. But the proprietary tools are usually more robust (for instance, it is fairly easy to break models in FreeCAD if you aren’t careful) and have better workflows for creating really complex designs.

    I’ll also add that Lightroom is still better than Darktable and RawTherapee for me. Both of the open source options are good, but Lightroom has better denoising in my experience, and it gains support for new cameras and lenses sooner than the open source options do.

    With time I’m sure the open source solutions will improve and catch up to the proprietary ones. KiCAD and FreeCAD are already good enough for my needs, but that might not have been true if I were working on very complex projects.


  • KingRandomGuy to 3DPrinting · Ender 3 V2 damage? · 14 days ago

    Cute cat! Nevermore and Bentobox are two super popular ones.

    Since you’re running an E3 V2, first make sure you’ve replaced the hotend with an all-metal design. The stock hotend has the PTFE tube routed all the way into the melt zone, which is fine for low-temperature materials like PLA but can result in the PTFE off-gassing at higher temperatures, such as those used by ASA and some variants of PETG. PTFE particles are almost certainly not good to breathe in over the long term, and can even be deadly to certain animals, such as birds, in small quantities.


  • KingRandomGuy to 3DPrinting · Ender 3 V2 damage? · 14 days ago

    In my experience, going a bit above 10% can be helpful in the event of underextrusion, and I’ve seen it add a bit more rigidity. But you’re right that there are diminishing returns until you start maxing out the infill.

    4 perimeters at 0.6 mm or 6 at 0.4 mm (2.4 mm of total wall thickness either way) should be fine.


  • KingRandomGuy to 3DPrinting · Ender 3 V2 damage? · 15 days ago

    Yeah, I agree. I didn’t see an enclosure in the photo, so I said PETG is fine for this application. With an enclosure you’d really want to use ABS/ASA, though PETG could work in a pinch.

    I also agree that an enclosure (combined with a filter) is a good idea. I think people tend to undersell the potential dangers from 3D printing, especially for people with animals in the home.




  • KingRandomGuy to 3DPrinting · Ender 3 V2 damage? · 16 days ago

    IMO heat generated from stress will not be a big deal, especially considering that people frequently build machines out of PETG (Prusa’s i3 variants, custom CoreXYs like Vorons and the E3NG). The bigger problem is creep, which is why you shouldn’t use PLA for this part.


  • KingRandomGuy to 3DPrinting · Ender 3 V2 damage? · edited · 16 days ago

    PETG will almost certainly be fine. Just use lots of walls (6 walls, maybe 30% infill). PETG’s heat resistance is more than good enough for a non-enclosed printer. Prusa has used PETG for their printer parts for a very long time without issues.

    Heat isn’t the issue to worry about IMO. The bigger issue is creep (cold flow): permanent deformation that results even from relatively light, sustained loads. PLA has very poor creep resistance unless annealed, but PETG is quite a bit better. ABS/ASA would be even better, but they’re much more of a headache to print.


  • > It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. An LLM can’t reason through a problem that it hasn’t previously seen.

    This also isn’t an accurate characterization IMO. LLMs, and ML models in general, can generalize to unseen problems, even if they aren’t perfect at it; for instance, you’ll find that LLMs can produce commands to control robot locomotion, even across different robot types.

    “Reasoning” here is based on chains of thought, where the model generates intermediate steps that then help it produce a more accurate final answer. You can fairly argue that this isn’t reasoning, but it’s not like the model is traversing a fixed knowledge graph or something.
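    To make the chain-of-thought idea concrete, here’s a minimal sketch. The `generate` function is a hypothetical stand-in for any LLM completion call, and the example questions are made up for illustration:

    ```python
    # Hypothetical `generate` stands in for any text-completion LLM call.
    def generate(prompt: str) -> str:
        raise NotImplementedError  # plug in a real model here

    # Direct prompting: the model must jump straight to the answer.
    direct = "Q: A shirt costs $24 after a 20% discount. What was the original price?\nA:"

    # Chain-of-thought prompting: an exemplar with worked intermediate steps
    # nudges the model to emit its own steps before committing to an answer.
    cot = (
        "Q: A pen costs $3 and a notebook costs twice as much. What is the total?\n"
        "A: The notebook costs 2 * 3 = 6, so the total is 3 + 6 = 9. The answer is $9.\n"
        "Q: A shirt costs $24 after a 20% discount. What was the original price?\n"
        "A: Let's think step by step."
    )
    ```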


  • > All of the “AI” garbage that is getting jammed into everything is merely scaled up from what has been before. Scaling up is not advancement.

    I disagree. Scaling might seem trivial now, but the state-of-the-art architectures for NLP a decade ago (LSTMs) could not scale to the degree that our current methods can. Designing new architectures to perform better on GPUs (such as attention and Mamba) is a legitimate advancement. Furthermore, the viability of this level of scaling wasn’t really understood until phenomena like double descent (in which test error surprisingly goes down, rather than up, after increasing model complexity past a certain point) were discovered.
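    Double descent is easy to reproduce in a toy setting. The sketch below is my own construction (minimum-norm least squares on random ReLU features), not taken from any particular paper’s setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Linear ground truth observed through noisy training labels.
    n_train, n_test, d = 50, 1000, 10
    X_tr = rng.normal(size=(n_train, d))
    X_te = rng.normal(size=(n_test, d))
    w_true = rng.normal(size=d)
    y_tr = X_tr @ w_true + 0.5 * rng.normal(size=n_train)
    y_te = X_te @ w_true

    for n_feat in [10, 25, 50, 100, 400, 1600]:
        W = rng.normal(size=(d, n_feat)) / np.sqrt(d)   # fixed random projection
        F_tr = np.maximum(X_tr @ W, 0)                  # ReLU features
        F_te = np.maximum(X_te @ W, 0)
        beta = np.linalg.pinv(F_tr) @ y_tr              # minimum-norm least squares
        mse = np.mean((F_te @ beta - y_te) ** 2)
        print(f"{n_feat:5d} features -> test MSE {mse:.3f}")
    # Test error typically peaks near n_feat ~= n_train (the interpolation
    # threshold) and drops again in the overparameterized regime.
    ```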

    Furthermore, lots of advancements were necessary to train deep networks at all. Better optimizers like Adam instead of pure SGD, plus tricks like residual layers and batch normalization, were necessary even to scale small ConvNets up, working around issues such as vanishing gradients and covariate shift that appear when naively training deep networks.
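    As a rough illustration of two of those tricks, here’s a minimal residual block sketch in PyTorch (my own example, not any particular published architecture):

    ```python
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Conv block with a skip connection and batch normalization."""

        def __init__(self, channels: int):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)  # the skip keeps gradients flowing

    model = ResidualBlock(64)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam rather than pure SGD
    y = model(torch.randn(1, 64, 32, 32))  # sanity-check forward pass
    ```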


  • I agree that pickle works well for storing arbitrary metadata, but my main gripe is that there’s no exact standard for how the metadata should be formatted. For FITS, for example, there are keywords for metadata such as the row order, CFA matrices, etc. that all FITS processing and display programs need to follow to properly read the image. So to make working with multi-spectral data easier, it’d definitely be helpful to have a standard set of keywords and encoding format.
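    For a sense of what that looks like in practice, here’s a small sketch using astropy. BUNIT is a real FITS keyword; the band-order comment is ad hoc, which is exactly the gap a standard would fill:

    ```python
    import numpy as np
    from astropy.io import fits

    # Four spectral bands of dummy data.
    data = np.zeros((4, 512, 512), dtype=np.float32)

    hdu = fits.PrimaryHDU(data)
    hdu.header["BUNIT"] = "adu"  # standardized keyword: pixel units
    # No standard keyword exists for band ordering, so it ends up in a comment.
    hdu.header["COMMENT"] = "Band order: 450nm, 550nm, 650nm, 850nm"
    hdu.writeto("multispectral.fits", overwrite=True)
    ```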

    It would be interesting to see if photo editing software picks up multichannel JPEG. As of right now there are very few sources of multi-spectral imagery for consumers, so I’m not sure what the target use case would be. The closest thing I can think of is narrowband imaging in astrophotography, but normally you process those in dedicated astronomy software (e.g. Siril, PixInsight), though you can also re-combine different wavelengths in traditional image editors.

    I’ll also add that HDF5 and Zarr are good options for storing arrays in Python if standardized metadata isn’t a big deal. Both have the benefit of user-specified chunk sizes, so they work well for tasks like ML where you have random access patterns.
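    A minimal h5py sketch of the chunking idea (dataset name, chunk shape, and the wavelength attribute are all illustrative choices, not a standard):

    ```python
    import h5py
    import numpy as np

    # (bands, height, width) hyperspectral cube of dummy data.
    cube = np.random.rand(16, 1024, 1024).astype(np.float32)

    with h5py.File("cube.h5", "w") as f:
        dset = f.create_dataset(
            "reflectance",
            data=cube,
            chunks=(16, 64, 64),   # one chunk = all bands for a 64x64 tile,
                                   # so a random spatial read touches one chunk
            compression="gzip",    # lossless, suitable for scientific use
        )
        dset.attrs["wavelengths_nm"] = np.linspace(400.0, 1000.0, 16)
    ```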


  • I guess part of the reason is to have a standardized method for multi- and hyper-spectral images, especially for storing things like metadata. Simply storing a numpy array may not be ideal if you don’t keep metadata on what is being stored and in what order (e.g. axis order, which channel corresponds to each frequency band, etc.). Plus, it seems like they extend lossy compression to this modality, which could be useful in some circumstances (though for scientific use you’d probably want lossless).

    If compression isn’t the concern, certainly other formats could work to store metadata in a standardized way. FITS, the image format used in astronomy, comes to mind.
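    To show the gap concretely: even when you keep metadata next to a raw array, the field names are still whatever you make up. This np.savez sketch (all field names ad hoc) is exactly the unstandardized case described above:

    ```python
    import numpy as np

    cube = np.random.rand(8, 256, 256).astype(np.float32)
    np.savez(
        "cube.npz",
        data=cube,
        axis_order=np.array(["band", "y", "x"]),       # ad hoc field name
        band_centers_nm=np.linspace(450.0, 900.0, 8),  # ad hoc field name
    )

    loaded = np.load("cube.npz")
    print(loaded["axis_order"], loaded["band_centers_nm"][:3])
    ```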





  • It really depends on what you’re looking for. Are you just looking to learn how to print new materials, or do you have specific requirements for a project?

    If it’s the former, I’d say the easiest thing to try is PETG. It prints pretty reasonably on most printers, though it has stringing issues. It has different mechanical properties that make it suitable for other applications (for example, better temperature resistance and impact strength than PLA). It’ll be much less frustrating than trying to dial in ABS for the first time.

    ABS and TPU are both a pretty large step up in difficulty, but are quite good for functional parts. If you insist on learning one of these, pick whichever fits your projects better. For ABS you’ll want an enclosure and a well-ventilated room (IMO I wouldn’t stay in the same room as the printer), as it emits harmful fumes during printing.



  • I’m a researcher in ML, and LLMs absolutely fall under ML. The “learning” in “machine learning” just means fitting the parameters of a model, which makes it just an optimization problem. In the case of an LLM, that means fitting the parameters of the transformer.

    A model doesn’t have to be intelligent to fall under the umbrella of ML. Linear least squares is considered ML; in fact, it’s probably the first thing you’ll do if you take an ML course at a university. Decision trees, nearest neighbor classifiers, and linear models all are machine learning models, despite the fact that nobody would consider them to be intelligent.
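    As a concrete example of that framing, here’s linear least squares written as the optimization problem it is (a minimal sketch with synthetic data):

    ```python
    import numpy as np

    # Linear least squares as "learning": fit parameters w by minimizing
    # squared error ||Xw - y||^2, i.e., a plain optimization problem.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                # features
    w_true = np.array([2.0, -1.0, 0.5])          # ground-truth parameters
    y = X @ w_true + 0.1 * rng.normal(size=100)  # noisy targets

    w_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(w_fit)  # recovers something close to w_true
    ```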


  • Yeah, I agree that it does help for some approaches that do require a lot of VRAM. If you’re not on a tight schedule, this type of thing might be good enough to just get a model running.

    I don’t personally do anything that large; even the diffusion methods I’ve developed were able to fit on a 24 GB card. But I know that with the hype around multimodal models, VRAM needs can be pretty high.

    I suspect this machine will be popular with hobbyists for running really large open weight LLMs.
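    For a rough sense of why, here’s a back-of-envelope weight-memory estimate (weights only; the KV cache and activations add more, and the model sizes are just illustrative):

    ```python
    # Approximate memory needed just to hold model weights.
    def weight_gb(params_billion: float, bits_per_weight: int) -> float:
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    for params, bits in [(8, 16), (70, 16), (70, 4)]:
        print(f"{params}B params @ {bits}-bit = {weight_gb(params, bits):.0f} GB")
    # 8B @ 16-bit = 16 GB, 70B @ 16-bit = 140 GB, 70B @ 4-bit = 35 GB,
    # which is why unified-memory machines appeal for large open-weight LLMs.
    ```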