lol, it's insane how inaccurate the LLM is in that. It made me chuckle. I'll concede 1 and 2 as I already did, but for the sake of the chuckle I'll go from bottom to top:
25: it's my opinion; it can't be a fallacy when it's just stating how I felt, not trying to change anyone's mind.
24: I'm not assuming anything; it was a question, which the LLM apparently can't interpret.
23: it's ignoring the criteria/information I had already supplied, and treating my statement as though it were meant to be objective.
22: Again, it's based on the fact that more users = a better ability to find and share documentation. That isn't fallacious in nature.
21: this was just a warning, because I've seen it myself (6 of my technical posts in the past 4 weeks ended up being nuked).
20: see 24
19: my experiences are somehow an appeal to common practice? Those are just my experiences with it.
18: unrelated to the current discussion, but I can see why it flagged this.
17: again, my actual experiences with it; that doesn't make it a fallacy.
16: I love that it's claiming I don't know my own friend group's shell usage, as if I don't already share scripts with them.
15: Has nothing to do with the argument and is actually a misdirection in itself.
14: I've always argued both metrics; I don't see the goalpost-moving it claims is here… lol
13: I said the exact opposite of what it's claiming. I acknowledged that it would require effort and that it wasn't something I wanted to do.
12: I didn't assume it was better in this case; I stated that since it was easier to find scripts, it was less work.
11: I never claimed the stated assumption; I explained why I did what I did. It was counterintuitive to me; that doesn't mean it isn't intuitive to others. That claim is itself an illicit minor.
10: it steers the comment away from my personal experience and tries to redirect it into a reason why others shouldn't use it.
9: Yes, I agree it's a generalization; that was the entire point: to show that most of my experiences have been like that, and why I therefore don't use it.
8: Invalid. I'm not attacking anyone; I even acknowledged that I can see why some people use it, I just can't.
7: This isn't a slippery slope, because it's accurate. There is less info available on fish shell, simply due to how long it has been around.
6: Invalid claim
5: I can kind of see this one, but it's not that I think it has no merits; it's just not for me.
4: Such evidence is weak on its own, but when supported by facts it's valid.
3: I don't quite understand the link to authority here, but LLMs definitely struggled converting bash to fish for me.
2: already explained this one in the parent post
1: same as 2
I love LLMs at times; I can understand some of the info they give, but man, do they not know how to read dialogue.
A couple of your refutations are sound, a couple more are iffy (they may hold up), and the rest fail and even add more fallacies. XD … that's a lot of flexing Brandolini's law. Not sure it's worth it. In the attempts to refute, you added something like 38 new fallacies, for a total of about 57 now, which kind of overshadows the 2-5 refuted. I'm out. Enough red flags.
I had to look up what that was because I'd never heard of that law. I like it, and will likely use it in the future. I have to agree, though, from the other direction. Yeah, my response to the LLM fallacy list contained a lot of that; I knew it would going in, since any LLM response interpreting data at that level of specificity generally does. That's ultimately why I don't like using LLMs in the first place: you have to go back and fix the output anyway. (Note: the LLM would say that's a fallacy since it doesn't /always/ happen, and it would also flag me as claiming it always will, but I digress.)
I responded to the first fallacy post mostly to show how inaccurate LLMs can be when you use them to interpret dialogue. They are great for summarizing concepts and finding data sets, but actually classifying or generating information is one of their weak points. A good chunk of the claimed fallacies came down to it either misinterpreting the post, ignoring other parts of the post, or redirecting in order to fit its mantra. And some of them simply added additional fallacies to the mix as well.
And yeah, I agree. I'm done with this conversation as well. We're no longer talking about fish anymore; the topic has shifted to something that neither credits nor discredits the initial posts, since it's all built on LLM false attribution, and it strays from the topic of the community lol