It stands to reason that if you have access to an LLM’s training data, you can influence what’s coming out the other end of the inscrutable AI’s network. The obvious guess is that…
ETA: also, you can prove a negative, it’s just often much harder. Since the person above said it doesn’t work, the positive claim is theirs to justify. Whether it’s hard or not is not my problem.
That’s not how evidence works. If the original person has evidence that the software doesn’t work, then we need to look at both sets of evidence and adjust our view accordingly.
It could very well be that the software works 90% of the time, but there could exist some outlying examples where it doesn’t. And if they have those examples, I want to know about them.
Okay. I have that. Now what?
Then you have your evidence, and your previous post is nonsensical.