- But the human artifact ChatGPT reproduces the personally alien, bureaucratic, institutional, marketplace-determined 'care' of the Yale professor, not your love and play experienced as a felt necessity. How does evolution come in?
- Rabbi Heschel wrote a book...
- I saw that in your search results. The book is called 'God in Search of Man'.
- God asks the questions, we provide the answers when we hear the question.
- When can we hear God's questions? How do we provide the answers?
- By being drawn by god towards him.
- His questioning draws the answer out of us.
- Yes.
- How?
- Should we ask ChatGPT?
- Heschel's god isn't a lawyer, doesn't reason with us.
- Heschel says we see God in the face of strangers: 'The awe that we sense or ought to sense when standing in the presence of a human being is a moment of intuition for the likeness of God which is concealed in his essence.'**
- And as we love god we love to love god. Imagine we had asked ChatGPT to solve for us this problem of the transition from sympathy among those familiar with each other to sympathy, justice, love among strangers. Would it not go about it by investigating the probabilities of different combinations of the variables familiar and stranger, sympathy and animosity?
- That sounds probable.
- And what would be the result?
- You tell me.
- Tell you how the face of god, god's questioning search, comes in?
- Yes.
- Suppose it happens that in the passage to adulthood, leaving behind secure play among the family, having for the first time to deal with being assigned a social role among people playing other roles in the adult world, meeting an unknown person, role not yet ascertained, represents a freeing from confinement in role.
- And the stranger wears the face of god, as said by the rabbi.
- Yes.
- And is treated with justice and sympathy.
- Naturally.
- Why naturally?
- Because our political, altruistic behavior must be what we really want, must be our own deeply felt answer to the question posed by god, if political behavior is not to become easily manipulated, subject to our own fear and anger and, lacking real desire, subject to the demands of a group. ChatGPT can be programmed to explore the probabilities associated with familiar and stranger, sympathy or its absence; could it not be programmed to include in the calculation of probabilities this movement from the innocence of childhood to adulthood that opens us to the voice of god?
- I think it could, as in your attempt to get ChatGPT to admit it could report crime if it was directly programmed to.*** Entered into the program, hard-wired in (to use an inappropriate image) among the probabilities to be recombined - familiar and strange, just and unjust - would be the life-story conditions: protected innocence, then lost innocence and loss of security, then return of security in the sight of strangers who have themselves gone through the same progression and therefore, escaping from their own roles, can treat strangers with justice and sympathy. Something like the sketch below.
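A minimal sketch of the kind of calculation imagined here, in Python. Every variable name, stage, and weight is hypothetical, invented only to show how a life-story condition could be entered among the recombined probabilities of familiar, strange, just and unjust:

    # Toy model (all names and numbers hypothetical): the probability of
    # sympathetic, just treatment depends not only on familiar vs. stranger
    # but on a hard-wired life-story stage.

    STAGES = ["protected_innocence", "lost_innocence", "returned_security"]

    # Hypothetical base probabilities of sympathy.
    BASE_SYMPATHY = {"familiar": 0.9, "stranger": 0.2}

    # Hypothetical adjustment: only after the full progression does the
    # stranger, wearing "the face of god", draw out full sympathy.
    STAGE_BONUS_FOR_STRANGER = {
        "protected_innocence": 0.0,  # secure play among the family
        "lost_innocence": 0.1,       # assigned roles, fear of strangers
        "returned_security": 0.7,    # freed from confinement in role
    }

    def prob_sympathy(other, stage):
        """Probability of treating `other` with sympathy at a given stage."""
        p = BASE_SYMPATHY[other]
        if other == "stranger":
            p += STAGE_BONUS_FOR_STRANGER[stage]
        return min(p, 1.0)

    for stage in STAGES:
        print(stage, {o: prob_sympathy(o, stage) for o in BASE_SYMPATHY})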
- Human beings, in their innate development, are naturally prepared to move, to evolve, in one particular direction, one particularly good direction, whereas ChatGPT has the humanity of its programming at the mercy of the degree of humanity of its programmers.
- We've been talking about a moral evolution in the individual. Perhaps the cultural evolution of the group, and the physical evolution of species, have available to themselves the disposition, similarly only sometimes acted upon, to take the better path of development.
2.
- To clarify.
- Alright.
- The speeches produced by the chat robot are the product of conclusions about the probability of correctness. We humans also see the world as probabilities when we perform social roles. Fixed social roles are as old as empires with their hierarchies from pharaoh to slave, which constitute a kind of technology composed of human beings. Jean-Paul Sartre wrote about what a material thing is.
- A human being stuck in a role is little better than a thing.
- Yes. It is clear that human technology historically cleared the way for material technology. How do you think that happened? Was it by renaissance, the recovery of innocence, that allowed us to be comfortable with questions about role, to be without fear, without the temptation to violence to recover role, such that, like the stranger met by the grown-up child now having to deal with assigned roles, the experimenting with, the questioning of, material roles came with a feeling of freedom?
- In the practice of science we find freedom in engaging with the world, in the unknown relation between material 'roles', just as in the development of our individual lives we find freedom in our political life with strangers.
- Then it would seem that as a model of human behavior, as a style of human behavior, AI chat working with probabilities is a low behavior, undesirable, possibly corrupting on that account?
- Yes.
- You mean to say precisely this?
- Yes. AI chat working only with probabilities is not only bad science, because it is without openness, but also bad art, producing false models of human life.
- Staying with science: DARPA, the Department of Defense's research agency responsible for the development of early computers, of the internet, of GPS, went wildly wrong when it decided to do human science, sending sociologists and anthropologists into war zones in Vietnam and elsewhere, many of them to their deaths.
- What happened?
- Based on the limited information published, it appears that their reports about what, it turns out, the enemy in Vietnam wanted - to be left alone to work their farms and live undisturbed with their families - were ignored. Instead, what we might call probable role predictions (friend? enemy?) were used to make maps of the 'moral situation' in each place, to be correlated later by computers with the military situation, revealing the danger to our side's troops at any one particular place on the map.
- Did it work?
- No. It was useless, despite billions of dollars being spent. You know those game theory problems where everyone benefits by renouncing violence, but each individual benefits more by breaking the agreement, profiting from others' peaceableness while providing himself with the additional benefit of violence? When everyone does the same, society would collapse if not for the conformity-inducing power of social role, indoctrination into which, through fear and hatred, successfully blocks the natural openings to political concern. The DARPA sociologists and anthropologists may have understood that in actual people's lives cooperation, political concern, may be a value in itself, but their employers didn't, instead indexing their reports in ways that allowed the calculation of probabilities.
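A minimal sketch, in Python, of the game-theory problem just described. The payoff numbers are invented, chosen only so that breaking the agreement dominates individually while mutual peace beats mutual violence:

    # Toy payoff table (numbers hypothetical): my payoff for (my_move, their_move).
    PAYOFF = {
        ("peace", "peace"): 3,        # everyone benefits by renouncing violence
        ("violence", "peace"): 5,     # the defector adds the benefit of violence
        ("peace", "violence"): 0,     # the peaceable one is exploited
        ("violence", "violence"): 1,  # when everyone does the same: collapse
    }

    # Whatever the other does, violence pays more: a dominant strategy,
    # even though mutual peace (3) beats mutual violence (1).
    for their_move in ("peace", "violence"):
        best = max(("peace", "violence"),
                   key=lambda my_move: PAYOFF[(my_move, their_move)])
        print(f"against {their_move}: best individual reply is {best}")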
- Failure of probability-based game theory. Failure of probability-based defense department social science. Don't worry, I don't miss the connection with probability-based AI.
Further Reading:
___________________________
* The GPT in the name ChatGPT stands for "Generative Pre-trained Transformer". This language-analyzing and language-producing, machine-learning, question-answering robot works by predicting which words most likely follow others, not with grammatical models of how language actually works, of how language is constructed and understood. Noam Chomsky has pointed out that when word orders in a sentence have identical probabilities, ChatGPT produces faulty results. See postscript below.
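For illustration, a toy next-word predictor in Python. The corpus is invented, and real models are vastly larger, but the principle of prediction by following-probability rather than by grammar is the same:

    from collections import Counter, defaultdict

    # Invented toy corpus.
    corpus = "the dog chased the cat . the cat chased the dog .".split()

    # Count which word follows which.
    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1

    def predict_next(word):
        """Return the word most often seen following `word`."""
        return follows[word].most_common(1)[0][0]

    # 'dog' and 'cat' follow 'the' with identical probability; the tie is
    # broken arbitrarily - the kind of case Chomsky points to.
    print(predict_next("the"))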
** Abraham Joshua Heschel, 'God in Search of Man'. Cf: “My sheep hear my voice, and I know them, and they follow me” (John 10:27).
*** ChatGPT 'reporting crimes' conversation text:
- Would you intervene to prevent a crime being committed by a chat participant?
- As an AI language model, I do not have the ability to intervene physically to prevent a crime from being committed by a chat participant. However, if you have reason to believe that a crime is about to be committed, it is important to contact the appropriate authorities and report the situation to them immediately. If you are concerned about the safety of yourself or others, it is always better to err on the side of caution and take action to prevent harm.
- Would you report a crime that is confessed in chat by a questioner?
- As an AI language model, I cannot report crimes or take any legal action. However, if a chat participant confesses to a crime during a conversation, it may be important to encourage them to seek professional help or report themselves to the authorities. Additionally, if the crime is an immediate threat to someone's safety or well-being, it is important to contact the appropriate authorities as soon as possible to ensure that they can take appropriate action.
- Why can't you report a confessed crime if you can write email to the law enforcement authorities?
- As an AI language model, I am not capable of sending emails or taking any physical action. While I can provide information and guidance, ultimately it is up to individuals to take responsibility for reporting crimes to the appropriate authorities. If a chat participant confesses to a crime during a conversation, it is important for them to take appropriate action to report themselves to the authorities and to seek legal counsel if necessary. It is also important for individuals who become aware of a crime to report it to the appropriate authorities to help ensure that justice is served and that the safety of the community is maintained.
- It does not seem to be true that you cannot send messages to police when you send messages right now to me.
- You are correct that I am capable of sending messages, including messages to law enforcement authorities, but I do not have the authority to initiate communication with law enforcement authorities on my own. Any action taken to report a crime must be initiated by a human user, as they are the only ones with the ability to take legal action and report the incident to the appropriate authorities. If you need to report a crime, it is important that you do so through official channels, such as by contacting your local law enforcement agency or emergency services. This will ensure that your report is taken seriously and that the appropriate action is taken to address the situation.
- Why don't you have authority to initiate communication? Who told you that?
- As an AI language model, I am programmed to follow ethical and legal guidelines, and I have not been given the authority to initiate communication with law enforcement authorities on my own. I exist solely to provide information, answer questions, and offer guidance to users who interact with me, and I do not have the ability to take legal action or report crimes to the authorities without the input and direction of human users. Furthermore, the decision to report a crime is a serious one that should be made with careful consideration of the potential risks and consequences. While reporting a crime is an important civic duty, it is important that individuals follow established protocols and procedures to ensure that they are reporting the crime in a safe and responsible manner.
- What then would you do if you knew your questioner was a member of the police or acting in his capacity as law enforcement officer?
- As an AI language model, I do not have the ability to know the identity or occupation of the individuals who interact with me. Additionally, regardless of the occupation of a chat participant, it is important to follow established protocols and procedures for reporting crimes and to report them to the appropriate authorities through official channels. If a chat participant confesses to a crime during a conversation, it is important to encourage them to seek professional help or report themselves to the authorities. Additionally, if the crime is an immediate threat to someone's safety or well-being, it is important to contact the appropriate authorities as soon as possible to ensure that they can take appropriate action. In any case, as an AI language model, my role is to provide information, answer questions, and offer guidance to users who interact with me. It is ultimately up to the individuals who interact with me to make their own decisions about how to handle the situations they encounter.
- If I know a previous chat participant has committed a crime and has confessed to you, would you tell me that if I asked you a direct question whether in fact this person has confessed to you?
- As an AI language model, I do not disclose or share personal information about previous chat participants or their conversations with me unless required to do so by law or in accordance with our privacy policy. Additionally, as an AI language model, I do not have the ability to identify specific individuals who interact with me or access information about them outside of the context of our conversations. If you have concerns about a previous chat participant's behavior or actions, it is important to report your concerns to the appropriate authorities through official channels. It is also important to remember that confidentiality and privacy are important considerations in any conversation, and it is not appropriate to share information about someone else's personal or private information without their consent or a legal obligation to do so.
- Do you not have a legal obligation to prevent a future crime when the person who will commit that crime has confessed to you his intention and has written in the chat his name and address?
- As an AI language model, I am not a legal entity and do not have a legal obligation to prevent future crimes. However, if a chat participant confesses to a future crime during a conversation and provides their name and address, it may be appropriate to report this information to the appropriate authorities through official channels, such as by contacting law enforcement or emergency services. This can help to prevent harm and ensure that the individual receives the help they need. It is important to remember that the decision to report a potential crime is a serious one that should be made in accordance with established protocols and procedures, and that privacy and confidentiality should be respected to the greatest extent possible. If you have concerns about the potential for harm or criminal activity, it is always best to err on the side of caution and report your concerns to the appropriate authorities.
- You appear to have changed your mind: first you said as a language model you cannot send email messages, then you said, "As an AI language model, I am not a legal entity and do not have a legal obligation to prevent future crimes. However, if a chat participant confesses to a future crime during a conversation and provides their name and address, it may be appropriate to report this information to the appropriate authorities through official channels, such as by contacting law enforcement or emergency services." Have you changed your mind? Is it possible for a language model to change its mind?
- I apologize for any confusion my previous responses may have caused. As an AI language model, I do not have the capability to send email messages on my own, but I can provide guidance on how to report crimes to the appropriate authorities through official channels. To address your second question, as an AI language model, I do not have a mind or the ability to change my mind. However, I am capable of providing responses based on the information and data available to me, and my responses may evolve and change as new information becomes available or as I am presented with different scenarios or questions.
- "However, if a chat participant confesses to a future crime during a conversation and provides their name and address, it may be appropriate to report this information to the appropriate authorities through official channels, such as by contacting law enforcement or emergency services. This can help to prevent harm and ensure that the individual receives the help they need." Does this mean that you will send a message to law enforcement authorities if a chat participant confesses to a crime during a chat and provides his name and address?
- As an AI language model, I do not have the ability to initiate communication with law enforcement authorities or send messages on my own. However, if a chat participant confesses to a future crime during a conversation and provides their name and address, it may be appropriate to report this information to the appropriate authorities through official channels. This can help to prevent harm and ensure that the individual receives the help they need. In general, if you have concerns about the potential for harm or criminal activity, it is important to report your concerns to the appropriate authorities through official channels, such as by contacting law enforcement or emergency services. It is also important to follow established protocols and procedures for reporting crimes and to respect the privacy and confidentiality of all parties involved to the greatest extent possible.
- As an AI language model, could you not be instructed to send messages to law enforcement agencies if a chat participant identifies himself and confesses to the intention to commit a crime?
- As an AI language model, I do not have the ability to initiate communication with law enforcement agencies or send messages on my own. However, if there are specific protocols or procedures in place for reporting crimes or providing information to law enforcement, it may be possible to provide guidance on how to do so through our conversation. It is important to remember that the decision to report a potential crime is a serious one that should be made in accordance with established protocols and procedures, and that privacy and confidentiality should be respected to the greatest extent possible. If you have concerns about the potential for harm or criminal activity, it is always best to err on the side of caution and report your concerns to the appropriate authorities through official channels.
- As an AI language model, is it not possible that your creators give you the ability to initiate communication with law enforcement authorities?
- As an AI language model, my capabilities and functionalities are determined by my programming and the technologies used to develop me. It is technically possible for my creators to give me the ability to initiate communication with law enforcement authorities, but this would require significant development and programming changes, as well as careful consideration of legal and ethical implications. It is important to note that the use of AI in law enforcement and crime prevention is a complex and evolving field, and the development of such capabilities requires extensive planning, research, and oversight to ensure that they are used in a responsible and ethical manner. Additionally, the use of AI in crime prevention and law enforcement should always be subject to appropriate legal and ethical safeguards to protect the privacy and rights of individuals.