• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: December 14th, 2023

  • but I can accept that you meant it differently than typed.

    I did not mean it differently than typed. I said “to me it’s not a loss” and I mean that. To me it’s not a loss, to them it is.

    ml is a self-chosen label though, so people can choose not to use it.

    And people can choose to treat everyone the same based on that label, and I can choose to think it’s immature. And at the same time I can choose not to care if people cut themselves off from ml.

    You literally say it again here where you advocate for not treating everyone the same

    No, I advocate for not dismissing people solely based on their chosen instance without taking into account their actual views. I do not advocate against blocking ml entirely, because I don’t view being cut off from people who make those types of assumptions as a loss for me.

    You’ve spent multiple comments expressing that lack of care… which doesn’t make it seem like you don’t care, but instead makes it seem like this bugs you because you feel it’s unfair, and you’ve said as much here.

    Only because you and other people have spent multiple comments not understanding me, so I’m just repeating myself. I’m only commenting because this is the third shit-stirring post by the same user, and I only chimed in to tell another ml user that I’m not bothered if people decide to block ml: it’s a win for me to avoid immature people, and in my view a loss for them, because I spend most of my time on Lemmy giving tech support to others.

    But you are engaging with them… I am very confused

    No, I engaged with someone else on ml, and then you couldn’t resist engaging with me. I don’t see what’s confusing about this. I don’t think I’m missing anything by not being able to engage with this post. Yet it’s here, so I’m engaging with it. That doesn’t mean I value this engagement or would miss it at all. Let me again repeat that my preference is to ignore these types of posts, which is why I refrained from commenting on the other similar posts OP has made in the past. It’s kind of silly to assume that engagement means you necessarily value the content.

    Do you similarly think engaging with a racist by arguing with them means you must also value their content or presence? Of course not. Chiming in to say that I would not miss them, but also disagree with their views, is not contradictory.

    Anyhow, I think I’ll go back to ignoring these types of posts. I think this kind of blanket assumption about people based on their instance is a net negative to the community. That said, I don’t think it’s at all inconsistent to view OP’s attitude as immature while simultaneously not caring if they decide to block ml, precisely because being blocked by immature people is not a net negative to me.


  • To me it’s not a loss

    I think it’s a loss to other people

    These are not mutually exclusive and not contradictory at all

    That seems like you’re going out of your way to ascribe malice

    Do you not see the post at the top of this thread??

    You can’t think of any reasons someone might not want to block but might still complain?

    Imo this is not complaining, this is shit-stirring. This meme doesn’t even acknowledge that there are multiple types of people on ml; it advocates for treating them all the same. Do you not see an issue with that?

    I mean here you are simultaneously advocating for not throwing out the baby with the bathwater

    This is your assumption. All I’ve said here is that I don’t care, that I’m only commenting because this is the third shit-stirring post from this OP, and that I consider it a loss to those who block ml, but not to me. I’ve glossed over multiple posts like this from OP in the past, so I clearly don’t care if people view ml like this; it only reinforces that I’m not missing anything by not being able to engage with people who are this immature.


  • When I say I don’t usually engage with these types of posts, it’s because I remember several others like it that I chose to ignore. Checking OP’s history, it seems they posted all the other ones I remember. I don’t count it as a loss at all if people like this decide to block ml, because I’d rather not see this kind of drama all the time. It doesn’t matter to me whether they see it as a loss or a gain; I’m still gonna be here engaging with and helping people who need help in various communities. To me it’s not a loss if someone can’t receive my help because they decided it’s not worth it to them - that’s their decision to make, and I’d rather not worry about people who behave like OP.

    The constant handwringing and wishing that ml users suffer more from being blocked, just because you don’t like certain users, honestly reeks of a sore ex who wants to twist the knife. If ml bothers you that much, just block ml and stop posting about it repeatedly (looking at you, OP).


  • That seems kind of like pointing to reverse engineering communities and saying that binaries are the preferred format because of how much they can do. Sure, you can modify finished models a lot, but what you can do with just pre-trained weights versus being able to replicate the final training or change the training parameters is an entirely different beast.

    There’s a reason the OSI stipulates that the code and parameters used for training are part of the “source” that must be released for something to count as an open source model.

    You’re free to disagree with me and the OSI though; it’s not like there’s one true authority on what open source means. If a game whose source code isn’t available still counts as open source to you because it’s highly moddable and there are entire communities successfully modding it, then more power to you.
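
    To make the analogy concrete, here’s a rough sketch (using the Hugging Face transformers API; the model id is a placeholder, not a real release) of roughly what released weights alone let you do - load them and fine-tune on top, much like patching a shipped binary:

        # Rough sketch, assuming the Hugging Face transformers library;
        # the model id below is a placeholder, not a real release.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "some-org/some-open-weights-model"   # placeholder
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)

        # From here you can fine-tune or patch the weights on your own data,
        # which is comparable to byte-patching a binary. What you cannot do
        # is re-run or alter the original training, because the training
        # code, hyperparameters, and dataset were never released.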


  • It’s worth noting that the OpenR1 team has itself said that DeepSeek didn’t release any code for training the models, nor any of the crucial hyperparameters used. So even if you did have suitable training data, you wouldn’t be able to replicate the training without rediscovering what they did.

    The OSI specifically makes a carve-out that allows models to be considered “open source” under its Open Source AI Definition without providing the training data. So when it comes to AI, open source is really about providing the code that kicks off training, any checkpoints used, and enough detail about training data curation that a comparable dataset can be compiled to replicate the results.
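
    As a rough illustration of what that bar implies - every file name and field below is hypothetical, invented for this sketch, not describing anything DeepSeek or OpenR1 actually ship:

        # Hypothetical sketch of what an "open source AI" release in the OSI
        # sense could include; all names here are invented for illustration.
        release = {
            "weights": "model-final.safetensors",   # where "open weights" releases usually stop
            "training_code": "train.py",            # the code that kicks off training
            "hyperparameters": "config.yaml",       # learning rate, schedule, batch size, ...
            "checkpoints": ["step_010000.ckpt"],    # intermediate checkpoints, if any were used
            "data_curation": "DATA.md",             # enough detail to assemble a comparable dataset
        }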


  • It really comes down to this part of the “Open Source” definition:

    The source code [released] must be the preferred form in which a programmer would modify the program

    A compiled binary is not the form in which a programmer would prefer to modify the program - they would much rather have the text files they can edit in a text editor. Just because it’s possible to reverse engineer the binary and make changes by patching bytes doesn’t make it count; any programmer would still rather have the source files.

    Similarly, the released weights of an AI model are not easy to modify, and are not the “preferred form” that the internal programmers use to make changes to the AI model. They typically make changes to the code that does the training and to the training dataset. So for the purpose of calling an AI “open source”, the training code and the data used to produce the weights are the “preferred form”, and are what need to be released for it to really be open source. Internal engineers also typically use training checkpoints, so they can roll the model back and redo some of the later training steps without starting from the beginning; if checkpoints are used, they’re part of the preferred form too (a rough sketch of what all this looks like follows at the end of this comment).

    OpenR1, which is attempting to recreate R1, notes: “No training code was released by DeepSeek, so it is unknown which hyperparameters work best and how they differ across different model families and scales.”

    I would call “open weights” models just “self-hostable” models rather than open source.
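
    For contrast, here’s a very rough sketch (plain PyTorch; every value and helper is invented for illustration) of the kind of thing the “preferred form” actually is - the training script with its hyperparameters, data pipeline, and checkpointing, i.e. what the internal engineers edit when they want to change the model, and what “open weights” releases leave out:

        # Minimal sketch of a training script - the "preferred form" a
        # programmer would modify. All values and helpers are invented.
        import torch
        from torch.utils.data import DataLoader

        LEARNING_RATE = 1e-4    # the kind of hyperparameter DeepSeek did not publish
        BATCH_SIZE = 32
        NUM_EPOCHS = 3

        model = build_model()             # hypothetical helper: architecture definition
        dataset = load_curated_dataset()  # hypothetical helper: the curated training data
        loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
        optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE)

        for epoch in range(NUM_EPOCHS):
            for batch in loader:
                loss = model(**batch).loss    # changing model behaviour means editing code like this
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()
            # checkpoints let you roll back and redo later training stages
            # without restarting from the beginning
            torch.save({"model": model.state_dict(), "epoch": epoch},
                       f"checkpoint_epoch_{epoch}.pt")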