Grok cannot detect AI content - defends Chinese propaganda

AI bases "decisions" on all data. GIGO.

How does AI know truth, correctness, and what's best for a patient? Oh, I know: data...

Doctors make mistakes all the time and for the most part are held accountable, except when it's groups of doctors like the AHA. That sort of thing gets hidden in the weeds and we all pay. AI will be worse.

I will take that last part back. AI might actually be better than the AHA. Maybe, if it's free from pharma influence.
Humans follow guidelines after a committee decides on the best compromise a decision maker can make and sets boundaries for the rule. It won't solve all problems, but what is it? A starting point?

AI will have to do the same. You can't just let AI do everything, just as you can't let a 15-year-old do everything after he finishes reading 200 books. I wouldn't let a 15-year-old do my surgery unless there are procedures he or she must follow like a real surgeon. Humans have to pass exams and do residencies; AI will eventually do the same.
 
For those who don't ever see web logs, here's something to know. Unlike Google, Bing, other search engines, and several AI systems/bots, Grok does not operate a bot that scans or searches the web at large.

There is no "Grok-bot". It's generally agreed within the web-master community that Grok gets it's information, it's training, it's knowledge of the world and current events, from X postings. Perhaps additionally other platforms like direct agency news feeds, or social media like reddit, 4-chan, instagram, tik-tok - but that would only be confirmed by the operators of those platforms.
 
I see.

You say that a student wondering why they needed to learn to write given the existence of AI was the worst night of your professional life.

But I'm curious-- would you be willing to share what your specific response was to that student?
Well, it took me 2-3 minutes to figure out what to tell him. So I gave him the historical/philosophical significance of literacy. But students like that don't care. He wants the paper.
So I told him: what if your boss/supervisor wants a 5-page policy recommendation? You use AI, and then the boss wants an explanation of how and why, in a meeting. He'd better know inside out how that new policy helps his institution, those they represent (citizens), and why.
This is where they get in trouble with AI. The reason he asked that is that he had to do a review of the book "Army at Dawn." And the frustration came out of it: AI always gives generic answers, and it is very easy to figure that out. Hence that dumb question. Frustration that he has to read 600+ pages and write a review with all the whys, hows, etc., on top of other reading and writing.
So in his mind, why bother?
 
Well, it took me 2-3 minutes to figure out what to tell him. So I gave him the historical/philosophical significance of literacy. But students like that don't care. He wants the paper.
So I told him: what if your boss/supervisor wants a 5-page policy recommendation? You use AI, and then the boss wants an explanation of how and why, in a meeting. He'd better know inside out how that new policy helps his institution, those they represent (citizens), and why.
This is where they get in trouble with AI. The reason he asked that is that he had to do a review of the book "Army at Dawn." And the frustration came out of it: AI always gives generic answers, and it is very easy to figure that out. Hence that dumb question. Frustration that he has to read 600+ pages and write a review with all the whys, hows, etc., on top of other reading and writing.
So in his mind, why bother?

Definitely frustrating when students have no interest in learning and just want to get through a class by phoning it in. I have experienced that many times first-hand.

But originally you said:

To which one student in the Spring semester asked me (he works for one of the security agencies, federal): "why do we need to know how to write, considering we have AI?"

A student inquiring as to the utility of "learning to write" (which involves a time-consuming diversion into a world of random, tedious, and often nonsensical rules) when an effective tool for performing that task exists seems a very different thing than what you describe here-- a student uninterested in learning an academic topic at a university level of depth and rigor.

I would be very interested to know your thoughts on the connection between a person's being able to write well and their ability to understand and communicate a topic at depth. With statements like...

I told my students that any hint of AI, and you are gone from this program, possibly from the university.

... I would assume that you think the two are fundamentally linked.

Am I correct on that?
 
AI anything should be banned on BITOG. 🙁

PLEASE ban this AI crap.

AI crap is for Reddit, not BITOG.

Please ban this AI trash.
What are you trying to say?

That we can’t talk about it? - we can.

That we can't post AI-generated content? - Also true, and has been true for years.

 
Definitely frustrating when students have no interest in learning and just want to get through a class by phoning it in. I have experienced that many times first-hand.

But originally you said:

To which one student in the Spring semester asked me (he works for one of the security agencies, federal): "why do we need to know how to write, considering we have AI?"
A student inquiring as to the utility of "learning to write" (which involves a time-consuming diversion into a world of random, tedious, and often nonsensical rules) when an effective tool for performing that task exists seems a very different thing than what you describe here-- a student uninterested in learning an academic topic at a university level of depth and rigor.

I would be very interested to know your thoughts on the connection between a person's being able to write well and their ability to understand and communicate a topic at depth. With statements like...

I told my students that any hint of AI, and you are gone from this program, possibly from the university.

... I would assume that you think the two are fundamentally linked.

Am I correct on that?
Well, his question is: why learn to write if we have AI? With students like that, you can go back to basics about the Enlightenment and the Renaissance, but that is a fool's errand. Or you can try to show what it practically means at their job.
As for AI, they will try to use it to avoid the two major assignments: a book review for the midterm, and a policy paper for the end of the semester that has to resemble an academic paper when it comes to research rigor, length, etc. They will try to beat the system. Some won't (I usually know which ones).
Any use of AI is absolutely forbidden. And if they use it, it is actually easy to figure out. Using AI later as a help tool, once they know the material, methodology, etc., is fine. But learn the basics first. For example, English is my second language. I use tools that are now AI to help with grammar. But even then, I must review the result, as AI-driven tools often change the nature of a sentence.
 
Any use of AI is absolutely forbidden. And if they use it, it is actually easy to figure out. Using AI later as a help tool, once they know the material, methodology, etc., is fine. But learn the basics first. For example, English is my second language. I use tools that are now AI to help with grammar. But even then, I must review the result, as AI-driven tools often change the nature of a sentence.

Ok, so using AI as an aid after understanding the material is fine and, in fact, you (as a professor) do it yourself. So, presumably you do not think there is a negative implication of using AI as a tool once the material is learned and understood at a sufficient depth. I do not disagree with that, nor do I think you're doing anything wrong using AI as a tool to help you with writing in a non-native language.

But not only do you forbid your students from using it, you experienced the most depressing day of your career when a student inquired as to why it is important to spend all the time and effort to learn high-level writing skills when AI can handle the task effortlessly?

That line of reasoning is consistent only if there were an axiomatic link between strong writing and one's ability to understand material at depth. But you yourself seem to provide a counterexample to that, so, at least on its face, it seems highly inconsistent. Am I missing something?

(Perhaps I am, so see my question below as it might clarify things for me).

As for AI, they will try to use it to avoid the two major assignments: a book review for the midterm, and a policy paper for the end of the semester that has to resemble an academic paper when it comes to research rigor, length, etc. They will try to beat the system. Some won't (I usually know which ones).

Would you say that your use of AI is "beating the system"? Since you appear to be highly credentialed in an academic area, I would tend to say no: you're effectively using an excellent tool to bridge a gap that you would certainly be capable of bridging yourself, though doing so would probably be a waste of your time.

Perhaps the particular student you brought up was an example of someone trying to "beat the system" in the sense that they were trying to pass a class without learning the material, but:

1) I think most students asking this question are honestly looking for a rationale as to why they shouldn't skip learning to write well when AI exists, an approach you seem to have taken yourself.

2) Why bring that student up to support your view that inquiring about the need to learn to write in an AI world is fundamentally depressing? Based on your response here, it would seem that if the student had been a strong one, you would not have been so depressed, so what point were you making with that example?

Well, his question is: why learn to write if we have AI? With students like that, you can go back to basics about the Enlightenment and the Renaissance, but that is a fool's errand. Or you can try to show what it practically means at their job.

Since I am ignorant of the connection, could you give a brief outline of how the Enlightenment and the Renaissance provide the basics of why it's important to spend time and effort learning to write when AI can do that job?

I'm interested in that, and even more interested as to why, if this rationale is so basic, you don't adhere to its principles and take the time to learn how to write well in English, thus avoiding the need for AI as a tool to help you with that task.
 
I can already see the ever-popular slogan "learn to code" changing to "learn to AI prompt"
If you were one of those pipe welders who, not too long ago, were told "learn to code" and you did, well, you're out of luck... again...

I'm curious about your conclusion.

Are you saying that pipe welders-turned-programmers are capable of programming but not prompting?
 
I'm curious about your conclusion.

Are you saying that pipe welders-turned-programmers are capable of programming but not prompting?
It was a joke, and the term "learn to code" became a meme. My speculation is that prompt engineering may be the next meme.

No experienced welder would switch to be a programmer just because a senile politician caused the loss of their job and told them they should learn how to code. Good welders are in very high demand.
 
Ok-- I'm neither a programmer nor a welder, so no offense taken. But I guess I'm not getting the joke.

What is the structure of the joke, and how does the idea that the welders will be out of luck... again... fit into that structure? Especially if the point was only that a meme would be changing (as they always do anyway).

Just not understanding the joke or the point being made with it, especially if, as you state, welders won't care either way due to their high pay.

Seems their luck is pretty good.

No need to respond if you don't care about allaying my confusion as that would be an endless task with questionable likelihood of success.
 
I'm sure AI can detect it; we just need the right tools/prompt.

I didn't watch the video but I bet Palantir could tell in a second if it's real or not.

They probably have the whole Maxar / Planet Labs global imaging database in their system. Just cross-check the image with their library, etc.
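On the "cross-check the image with their library" idea: one common technique is a perceptual hash, where each image is reduced to a short bit string and candidates are compared by Hamming distance. Below is a toy, stdlib-only Python sketch of a difference hash over grayscale grids; it only illustrates the general technique and makes no claim about what Palantir actually does (real pipelines use proper image resampling, e.g. via Pillow):

```python
def dhash_bits(gray, size=8):
    """Toy difference hash: sample a grayscale grid down to size x (size+1)
    by nearest-neighbor, then emit 1 wherever a pixel is brighter than its
    right neighbor. Returns a list of size*size bits."""
    h, w = len(gray), len(gray[0])
    bits = []
    for r in range(size):
        for c in range(size):
            y = r * h // size
            x1 = c * w // (size + 1)
            x2 = (c + 1) * w // (size + 1)
            bits.append(1 if gray[y][x1] > gray[y][x2] else 0)
    return bits

def hamming(a, b):
    """Number of differing bit positions between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Two toy "images": a smooth gradient and a uniformly brightened copy.
img_a = [[(x * 3 + y) % 256 for x in range(32)] for y in range(32)]
img_b = [[min(255, v + 1) for v in row] for row in img_a]

d = hamming(dhash_bits(img_a), dhash_bits(img_b))
print(d)  # 0: brightness shifts don't change relative differences
```

A matching service would precompute hashes for its whole library and flag any query image whose distance to a stored hash falls under a threshold.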
 
Ok-- I'm neither a programmer nor a welder, so no offense taken. But I guess I'm not getting the joke.

What is the structure of the joke, and how does the idea that the welders will be out of luck... again... fit into that structure? Especially if the point was only that a meme would be changing (as they always do anyway).

Just not understanding the joke or the point being made with it, especially if, as you state, welders won't care either way due to their high pay.

Seems their luck is pretty good.

No need to respond if you don't care about allaying my confusion as that would be an endless task with questionable likelihood of success.

The joke originated around 2020, when lots of people were losing jobs and "learn to code" was a pretty common expression, as programmers were in high demand.
It became a meme when a certain pipeline project was cancelled and a certain senile individual told those people they should learn to code.

My joke is that AI is taking over writing code, so the individuals who hypothetically did try to learn to code are out of luck yet again.
 
The joke originated around 2020, when lots of people were losing jobs and "learn to code" was a pretty common expression, as programmers were in high demand.
It became a meme when a certain pipeline project was cancelled and a certain senile individual told those people they should learn to code.

My joke is that AI is taking over writing code, so the individuals who hypothetically did try to learn to code are out of luck yet again.
I think the new version of the joke is "learn plumbing." That's supposed to be the last field that AI/robotics will take over. This is what some AI futurists have been saying/joking about as well.
 
At my work we can use AI tools to help with many tasks. I pretty much never use them, but many do.
We have fired many individuals because they unknowingly or knowingly uploaded sensitive information to AI when prompting.
We're talking mechanical/electronic engineers here, not young students or anything like that.

To me it's proof that people turn off or lower their cognitive functions when using it. And despite many warnings, emails, and constant communications about it, people are still getting fired for it.
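A minimal sketch of the kind of guardrail companies try to put in front of prompts: scan outgoing text for sensitive-looking patterns and redact them before anything leaves the network. The patterns and labels below are hypothetical examples for illustration, not any real DLP product's rules (real tools are far more thorough):

```python
import re

# Hypothetical patterns a prompt gateway might scrub. Each maps a label to a
# regex for a common class of sensitive string.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt):
    """Replace likely-sensitive substrings with placeholder tags."""
    for label, pat in PATTERNS.items():
        prompt = pat.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, key sk-abcdef1234567890AB"))
# -> Contact [EMAIL], key [API_KEY]
```

Of course, a filter like this only catches patterned data; it does nothing about an engineer pasting an unmarked schematic or spec into a prompt, which is why the firings keep happening despite the warnings.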
 
The joke originated around 2020, when lots of people were losing jobs and "learn to code" was a pretty common expression, as programmers were in high demand.
It became a meme when a certain pipeline project was cancelled and a certain senile individual told those people they should learn to code.

My joke is that AI is taking over writing code, so the individuals who hypothetically did try to learn to code are out of luck yet again.

Yes, I'm well aware of the meme (and it started much earlier than 2020).

But the only way the joke makes sense is if the proposition "the ones who learned to code are out of luck because their programming skill has been rendered moot by AI and the new skill will be prompting the AI" is true (which I'm pretty sure is the point you were making originally).

Thus, it seems clear that you were saying that the welders will be made moot by prompting (a point repeated in the last sentence of your quoted response) and I was wondering why.

If the point was that all programming jobs will be moot with AI, then:

1) Why bring up welders specifically, when all programmers will be out of luck?

2) If programming is made moot by AI, it would seem that those who were welders will have the best luck as their previous skill is still in good demand (per your subsequent comment). It's the non-trade skilled programmers that will really be out of luck.
 
Yes, I'm well aware of the meme (and it started much earlier than 2020).

But the only way the joke makes sense is if the proposition "the ones who learned to code are out of luck because their programming skill has been rendered moot by AI and the new skill will be prompting the AI" is true (which I'm pretty sure is the point you were making originally).

Thus, it seems clear that you were saying that the welders will be made moot by prompting (a point repeated in the last sentence of your quoted response) and I was wondering why.

If the point was that all programming jobs will be moot with AI, then:

1) Why bring up welders specifically, when all programmers will be out of luck?

2) If programming is made moot by AI, it would seem that those who were welders will have the best luck as their previous skill is still in good demand (per your subsequent comment). It's the non-trade skilled programmers that will really be out of luck.
It was a joke, man. If you're trying to rationalize it away, what do you want me to say?
 
It was a joke, man. If you're trying to rationalize it away, what do you want me to say?

You don't have to say anything. I was just trying to make sense of it.

I think that rather than making a joke (that made no sense as a joke) you were making a point related to the topic here.

The point made sense, but I was questioning the foundation of it since I think the premise of the point is false and pointing that fact out was highly relevant to the topic of this thread.

I think calling it a joke was you trying to rationalize away my questioning of your premise, which is fine (and very common).


To me it's proof that people turn off or lower their cognitive functions when using it. And despite many warnings, emails, and constant communications about it, people are still getting fired for it.

Not so sure about that conclusion either-- it's stunning how many educated professionals fall for and click on phishing attempts, sometimes locking down entire companies with ransomware as a result.

People can be very gullible and unthinking-- I don't think there's any connection to any influence AI might have on cognitive ability.
 
At my work we can use AI tools to help with many tasks. I pretty much never use them, but many do.
We have fired many individuals because they unknowingly or knowingly uploaded sensitive information to AI when prompting.
We're talking mechanical/electronic engineers here, not young students or anything like that.

To me it's proof that people turn off or lower their cognitive functions when using it. And despite many warnings, emails, and constant communications about it, people are still getting fired for it.
Who knows what happens to the data once it's uploaded? Is it stored? Does it help train the AI model? I think Claude recently added an opt-out for this, and it is on by default, IIRC.
 