• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: June 14th, 2023

  • I stopped running with music when I ran a half marathon once and about 17km in I just started getting annoyed by it. I’m out there dying, and some asshole is screaming into my ears.

    Idk, I enjoy running by itself. I ran a full marathon without music and didn’t get bored once. I’d either just enjoy myself, think about random stuff, look around me, play music or sing in my mind, etc. But to each their own I guess.


  • AccountMaker@slrpnk.net to memes@lemmy.world · Not fair

    I memorized 100 digits some years ago using muscle memory. I would type the digits of pi on the numpad and memorize the movements of my hand: how it feels and which button comes when, by position. Then when I had to recite them, I’d imagine a numpad, move my hand and just say the number that corresponds to the imaginary button I’m pressing (the encoding is roughly sketched below).

    Don’t know if that could work for 70k digits though
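
    As a purely illustrative sketch of that encoding (nothing here is from the original comment beyond the idea): each digit becomes a (row, column) position on a standard numeric keypad, so a digit sequence turns into a path of hand movements, and recall is just walking the path back into digits.

    ```python
    # Hypothetical illustration of the numpad idea described above.
    NUMPAD_POS = {
        "7": (0, 0), "8": (0, 1), "9": (0, 2),
        "4": (1, 0), "5": (1, 1), "6": (1, 2),
        "1": (2, 0), "2": (2, 1), "3": (2, 2),
        "0": (3, 1),
    }

    digits = "3141592653589793"  # first digits of pi

    # Encode: the digits become a path of key positions (hand movements).
    path = [NUMPAD_POS[d] for d in digits]

    # Recall: walk the remembered path and read the digits back off the keys.
    pos_to_key = {pos: key for key, pos in NUMPAD_POS.items()}
    recalled = "".join(pos_to_key[pos] for pos in path)
    assert recalled == digits
    print(path[:5])
    ```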


  • Image recognition depends on how many resources you can devote to your system. There are traditional feature extraction methods like edge detection, histograms of oriented gradients (HOG) and Viola-Jones, but the best performers are all convolutional neural networks (a rough sketch of the traditional pipeline is at the end of this comment).

    While the term can be up for debate, you cannot separate these use cases from things like LLMs and image generators; they are the same field. Generative models try to capture the distribution of the data, whereas discriminative models try to capture the distribution of labels given the data (see the second sketch at the end of this comment). Unlike traditional programming, you do not directly encode a sequence of steps that manipulates data into the result you want; instead you try to recover the distributions from the data you have, and then use the resulting model in new situations.

    And the generative and discriminative/diagnostic paradigms are not mutually exclusive either; one is often used to improve the other.

    I understand that people are angry with the aggressive marketing and find that LLMs and image generators do not remotely live up to the hype (I myself don’t use them), but extending that feeling to the entire field to the point where people say they “loathe machine learning” (which as a sentence makes about as much sense as saying you loathe the Euclidean algorithm) is unjustified, just like limiting the term AI to a single-digit number of use cases out of an entire family of solutions.
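
    As a minimal sketch of the traditional pipeline mentioned above, assuming scikit-image is installed (the sample image and parameter values are just illustrative choices): hand-crafted features such as edges or HOG descriptors, which can then be handed to any classic classifier.

    ```python
    from skimage import data, filters
    from skimage.feature import hog

    image = data.camera()          # sample grayscale image bundled with scikit-image

    edges = filters.sobel(image)   # simple edge detection
    features = hog(                # histogram of oriented gradients descriptor
        image,
        orientations=9,
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
    )

    # A fixed-length descriptor like this is what pre-CNN systems fed to
    # classifiers; a CNN instead learns the features itself.
    print(edges.shape, features.shape)
    ```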
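
    And a toy sketch of the generative vs. discriminative distinction, assuming scikit-learn is available (dataset and models are illustrative choices, not anything specific to the thread): GaussianNB models the per-class data distribution p(x | y), while LogisticRegression models p(y | x) directly.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Generative: fit a Gaussian per class, classify via Bayes' rule.
    generative = GaussianNB().fit(X_train, y_train)
    # Discriminative: fit p(y | x) directly.
    discriminative = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    print("generative accuracy:    ", generative.score(X_test, y_test))
    print("discriminative accuracy:", discriminative.score(X_test, y_test))
    ```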


  • They’re functionalities that were not built with traditional programming paradigms, but rather by building a model and training it to fit the desired behaviour, which lets it adapt to new situations; the same basic techniques that were used to make LLMs (a toy contrast is sketched below). You can argue that it’s not “artificial intelligence” because it’s not sentient or whatever, but then AI doesn’t exist at all, and people are complaining that something that doesn’t exist is useless.

    Or you can just throw out statements with no arguments, based on some personal secret definition, but that’s not a very constructive contribution to anything.
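
    For what it’s worth, here is a purely illustrative toy contrast between the two paradigms (the fruit data and thresholds are made up), assuming scikit-learn is installed: an explicitly coded rule versus a model whose behaviour is fitted from labelled examples.

    ```python
    from sklearn.tree import DecisionTreeClassifier

    # Traditional programming: the decision steps are written out by hand.
    def rule_based(width_cm: float, weight_g: float) -> str:
        return "apple" if width_cm > 6 and weight_g > 120 else "plum"

    # Machine learning: similar behaviour is recovered from labelled data instead.
    X = [[8.0, 180.0], [7.5, 160.0], [4.0, 60.0], [3.5, 50.0]]  # made-up measurements
    y = ["apple", "apple", "plum", "plum"]
    model = DecisionTreeClassifier().fit(X, y)

    print(rule_based(7.8, 170.0), model.predict([[7.8, 170.0]])[0])
    ```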


  • What?

    If you ever used online translators like Google Translate or DeepL, you were using AI. Most email providers use AI for spam detection (a toy sketch of the idea is below). A lot of cameras use AI to set parameters or to improve/denoise images. Cars with certain levels of automation often use AI.

    That’s for everyday uses; AI is used all the time in fields like astronomy and medicine, and even in mathematics to assist in writing proofs.
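
    To make the spam-detection point concrete, here is a tiny, hypothetical sketch assuming scikit-learn is available: a bag-of-words model with Naive Bayes. Real providers use far more sophisticated systems; the point is only that “AI” here means a model fitted to examples rather than a hand-written rule set.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Made-up training examples.
    emails = [
        "win a free prize now", "cheap pills limited offer",
        "meeting moved to 3pm", "lunch tomorrow?",
    ]
    labels = ["spam", "spam", "ham", "ham"]

    # Bag-of-words features + Naive Bayes classifier.
    clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(emails, labels)

    print(clf.predict(["free offer, claim your prize"]))  # -> ['spam']
    ```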



  • lol, actually, good science would be on the left side of the image, at least after giving an answer to a question. Good science will actually prove something, then give the answer, then have no reason to continue to find another answer for it (whatever the issue is.) If you are giving a different answer year after year (like say for the age of the earth), then aren’t you admitting that so far you haven’t known the answer?

    That’s not really the take of the modern philosophy of science. All modern schools of thought about science have the acceptance of falsehoods embedded into their models. I’ll give a few examples:

    Karl Popper famously stated that science cannot prove that anything is true, only that something is false. Thus, any scientific theory that’s still accepted is simply regarded as not yet proven wrong. Science is a cycle of proposing theories, proving them wrong, proposing new ones to account for the problems of the old ones, and so on, ever getting closer to the truth but never arriving.

    Thomas Kuhn wrote about scientific paradigms, models of the field in question that every scientist works within (for example Aristotelian motion, which was surpassed by Newtonian mechanics, which was in turn surpassed by Einstein’s relativity). During a period of “normal science”, scientists use their established methods until they run into too many problems they cannot resolve, at which point it is accepted that the paradigm cannot hold up, and a scientific revolution has to bring forth a new paradigm, one that is incommensurable with the old one. Some knowledge is lost in this process, but we move on until the next crisis.

    Paul Feyerabend wrote about counter-induction, which keeps science from becoming a dogma. An example he gives is Copernicus going completely against the science of his time with his heliocentric system; the Ptolemaic system was as cutting-edge back then as quantum mechanics is today.

    All in all, findings being continuously disproven and replaced by new ones is not bad science, it is science. Achieving actual, “true”, positive knowledge of the world, documenting it and saying “that’s it, we solved this problem, we’re done” is not something modern science even attempts.