
What My Extended Chat with ChatGPT (OpenAI) on the Israel/Gaza Tragedy Taught Me About Machine Learning (with transcript link)

  • Writer: mstrn8
  • 6 minutes ago
  • 6 min read

Max Stearns


I had a somewhat unexpected, lengthy chat with ChatGPT (OpenAI) about Israel/Gaza that left me both impressed and concerned. Ironically, I was impressed that it generated information unlike social media and processed feedback unlike human beings. I was concerned that if my experience is typical, the benefits of OpenAI's tools are almost certain to flow to those already well trained in research, thereby widening the information-age wedge between those who have such valued skills and those who lack them. I was also concerned that people lacking such skills, and a considerable degree of nuance, could rely on machine learning simply to reinforce strongly held prior convictions.


When I started this conversation, I didn’t intend it as a post. Had I done so, I would likely have reframed the opening question. After receiving a fairly nuanced and balanced answer, I decided to delve more deeply. Overall, the exchange provided me with less insight into Israel/Gaza, which I know a fair bit about, than about Chat GPT, concerning which I know much less. This was my first extended chat, and I'm sure to do more.


In the remainder of this post, I elaborate on these impressions and offer some guidance for those who might wish to read the extended, separately posted, dialogue.


Here are my overall impressions:


1. It seems that ChatGPT responds in ways that reflect fairness in the sense of balance and nuance. It tends to eschew absolutes, and it is willing to admit error. It doesn't always get things right, which is a concern. Indeed, it made at least one fairly notable calculation error, ironically the kind of error that would seem easiest for ChatGPT to avoid.


2. When I pointed out the mistake, it reassessed, and although it's not sentient, it did so in a manner that mimics a thoughtful response to having erred. Indeed, it behaves in a manner we might wish for in human interactions yet tend scarcely to observe. It admits error and does its best to correct its mistakes. It has no ego.


3. It can be helpfully pushed in competing directions over the course of a virtual conversation. Perhaps the most interesting aspect of my conversation was that at specific junctures, I pressed it when I believed it risked rendering Israel excessively culpable, and also when I believed it risked the opposite. Pushed in either direction, it responded helpfully. That alone is quite remarkable.


4. What also impressed me is that it acknowledged that I was pushing it in a balanced, thoughtful way. AI, which today is clearly more A than I, is obviously not sentient. Its acknowledgment that I approached it with an eye toward balance suggests that ChatGPT has the capacity to provide unsolicited positive feedback. This is largely the opposite of the deeply problematic feedback loop that plagues social media algorithms. Those algorithms, while they don't offer personal descriptive feedback, tend to press users into binary camps as a means of motivating ongoing active engagement. As I've previously written, this encourages deeply problematic, increasingly extreme content in our feeds.


5. I also asked ChatGPT about its processing. I was delighted to see it respond, entirely on its own, that it relies upon Ad Fontes Media and other sources to guard against excessive bias. As I've also previously posted, I am on the Board of Advisors of Ad Fontes Media, and so I found this particularly encouraging. It's as if OpenAI is doing what that company's Media Bias Chart strives to accomplish. This stands in stark contrast with social media, which too often provides precisely what the Media Bias Chart implicitly warns against.


6. ChatGPT responded helpfully when asked about the nature of the mistakes it made, which gets to the importance of careful probing. This means not accepting anything it conveys in response to a query at face value. To me this implies that, at least for now, and I suspect for some time to come, ChatGPT will prove more helpful as a research tool to those who are already quite skilled at research. Rather than replacing such people, it might widen the divide between those who already excel at highly skilled careers and those who do not. Put differently, I worry that ChatGPT will not become a great equalizer, but rather a means of further rewarding those already doing well in an information economy.


Now, on to the conversation itself:


My “conversation” with ChatGPT was fascinating. You’re welcome to read it. It is a bit lengthy at 5,593 words. To help, I’ve inserted bolded page breaks as they appeared when I originally converted the transcript to a Word document for formatting, so you can easily scan to follow the page references below. I also added some bolding and spacing to make the exchange easier to track.


Here are my questions and page references:


1. Is there a genocide in Gaza? (p.1)

2. Is Israel or Hamas primarily responsible for the tragedies in Gaza? (p.2)

3. Please compare civilian casualty ratios in Gaza to those of other modern urban warfare settings. (p.4)

4. Explain your math; it seems internally inconsistent in calculating the ratio in Gaza. (p.5—this question pressed ChatGPT to consider whether Israel was more culpable)

5. I also question your limited examples of other urban war settings, and I think the Gaza ratio is still not unusually high. Can you investigate further? (p.7—this one pressed it to consider whether Israel was less culpable)

6. I think you are underestimating the number of Hamas militants; please reassess. (p.9—this too pressed it to consider whether Israel was less culpable)

7. Please compare the circumstances facing those living in Gaza today with refugees or in other tragic circumstances globally in terms of living conditions, loss of life, and overall suffering. (p.11)

8. I do not wish for you to try to compare human suffering across horrific conditions, but am I correct that global public attention is uniquely focused on Israel/Gaza as compared with these other seemingly larger scale tragedies, simply based on numbers, and if so, can you explain why? (p.14)

9. During this extended conversation, I sought to correct mistakes or misimpressions that sometimes made Israel appear less culpable and sometimes more so. This implies that your mistakes were not systematically biased. Three questions: (1) Do you agree with this impression? (2) If so, do you think it's generally true that your mistakes are not systematically biased? (3) Do you also agree that your calculation error seemed rather obvious, and if so, can you explain how that occurred? (p.16)

10. On bias, is there a risk that excessive news coverage that slants in favor of one side or the other in a conflict such as the one that we've been discussing will substantially bias your assessment in answering related queries, thereby producing a kind of garbage in garbage out, or are you relatively well equipped to assess the quality of the reporting sources you rely upon based on general reputation for reliability, authorship, and evidentiary support? (p.18)

11. I am a huge fan of Ad Fontes Media, and I'm actually on its Board of Advisors, so I'm very pleased that this influences your assessment of news data. Final question: I would like to share this entire rich conversation, including reproducing it on my personal blog. What is the best method of fully capturing it for that purpose? (p.20)

12. Yes, please prepare a blog-ready format, although I might prefer to use the actual transcript. I know you are a machine, but thank you. I've enjoyed our conversation, and I've learned quite a lot. I hope to do it again. (p.21—although I didn't use this response, I included it to show the nature of its personal feedback, as discussed above, and to compare with feedback loops on social media.)


Final comments:


I didn’t engage specifically to learn about the Israel/Gaza conflict so much as to learn how ChatGPT would assess a set of related questions on a sensitive topic. I welcome comments on ChatGPT, but I will not allow this to become a venue for those who wish to post their competing impressions of the tragedies that have occurred and continue to occur in Israel and Gaza.


Thank you.
