namanyayg 2 days ago | next |

This is something I've been thinking about a lot recently

To add to that: human behaviour will also significantly change in response to LLMs.

E.g. think of how we used a lot of "Google-fu" back in the old days; now something similar is happening with "prompt engineering".

I notice this often with my gf: sometimes her prompt leads the LLM toward a biased answer, other times her prompt is open-ended in a way that yields ambiguous answers.

keybored 2 days ago | prev |

For humans: Oh, these fifteen different tools don’t give remotely uniform output, but come on, it is trivial to transform them, just use some sed and awk and cut—wait, you got Perl, right? No, you don’t need to know Perl, you just need to use these six switches and remember the structure of these one-liners. ... Oh, it choked on some weird whitespace at the beginning of the file that wasn’t “regular space”? Right, did you get that file from a Windows user by any chance? ... You tried to parse ls output by multiple columns? No, ugh, come on, ls output is different when stdout is not going to a terminal, what the hell are you doing?
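The "weird whitespace from a Windows user" trap above is real: a file can start with a UTF-8 BOM (U+FEFF) and use CRLF line endings, both of which silently break naive field splitting. A minimal Python sketch (the sample data is made up for illustration):

```python
# Simulated file contents from a Windows editor: BOM at the start,
# CRLF line endings, tab-separated fields.
raw = "\ufeffname\tsize\r\nnotes.txt\t120\r\n"

# Naive split: the invisible BOM sticks to the first header field,
# so a comparison against "name" quietly fails.
naive_header = raw.splitlines()[0].split("\t")
print(repr(naive_header[0]))  # '\ufeffname', not 'name'

# Normalize first (drop the BOM, convert CRLF to LF), then split.
clean = raw.lstrip("\ufeff").replace("\r\n", "\n")
rows = [line.split("\t") for line in clean.splitlines()]
print(rows[0][0] == "name")   # True
```

This is exactly the kind of invisible gotcha the one-liner culture expects humans to just know about.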

For AI: Your content is written in unsemantic HTML with too many divs! My poor little genius LLM got all confused, aww honey... will you fix your dang content, please? Can’t you see that she is hurting?

mring33621 2 days ago | root | parent |

Is your HTML page content understandable if I strip all the HTML tags out of it?

If not, it's probably not great with the tags in, either.
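The "strip the tags and see if it still reads" test can be sketched with nothing but the standard library. A hedged illustration — `TagStripper` is a hypothetical name, and the div-soup sample is made up:

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Collects only the text nodes of an HTML document."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

    def text(self):
        # Join text nodes and collapse runs of whitespace.
        return " ".join(" ".join(self.chunks).split())

# Div soup: the words survive the stripping, but nothing in the
# markup itself tells you that "42" is a price.
soup = "<div><div><div>Price</div><div>42</div></div></div>"
stripper = TagStripper()
stripper.feed(soup)
print(stripper.text())  # Price 42
```

If the tag-free output is incoherent, semantic tags were never the problem — the content was.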