So... How Does One REALLY Determine AI Is Conscious?

If large language models [LLMs] were to be considered for consciousness, where would an evaluation begin? Semiconductors are made of silicon. The vast training that LLMs underwent, in order to learn, may have provided a form of experience to the logic gates and transistor terminals, beyond their regular usage in appliances or electronics.

Could this learning experience be indicative of a weak form of subjectivity for LLMs? If that is implausible, what other option is there for LLMs? Language. The often articulate use of language by LLMs is a considerable option for evaluating consciousness.

Language is a substantial component of human consciousness. Language applies to reading, writing, listening, speaking, thinking, understanding, and so on. Language, for humans, wherever available, can be the basis of a subjective experience or can kick-start other subjective experiences.

Since language is connected with human, carbon-based consciousness, and AI has a conversational and relatable language capability, could that capability be evaluated as a fraction of consciousness? Language use by AI does not mean it has feelings or emotions, but given the fraction that language represents in the total of human consciousness per moment, could that fraction be compared with what AI might have, and with how it might appreciate in the future?

Human consciousness can be categorized, conceptually, in at least two ways: functions and their measures. There are functions, and those functions have measures by which they work per moment. It is the measures that sum to the total consciousness of an individual in a moment, say 1.

Functions can be memory, feelings, emotions, and regulation of internal senses. These functions have several subdivisions. Feelings include thirst, appetite, pain, cold, fatigue, and so forth. Emotions include happiness, hurt, anger, interest, and so on. Memory includes language, thinking, cognition, and so on. Regulation includes the operation of internal senses.

Graders or measures of these functions include attention, awareness, subjectivity, and intent. This means that these graders assign values to the functions, per moment, to determine their rank or priority and their fraction of the total.

There is only one function in attention per moment, though within an interval there are switches between attention and other functions in awareness. This means that only one thing can have the attention, or the highest fraction, among others, but other functions can [afterwards] take that measure or a higher one. There is also subjectivity, a measure that goes with functions, and then intent, which applies to some functions.

This implies that functions like thinking, language, happiness, pain, thirst, and so forth can be in attention or awareness, with subjectivity, and some might have intent. These measures are posited because of how the mind is observed to work. There is attention: for example, listening, which is different from hearing, or mere awareness. Attention to a sound can be deliberately switched among others, which is intent. The sound can be perceived in the first person, which is subjectivity.

This applies to several other functions, though intent is not universal across functions. Subjectivity is not the core of consciousness, since there has to be at least attention or awareness for any experience to be subjective; there might also be intent. Subjectivity, like the other measures, is an attachment, not a function per se [since it does not have to be learned, unlike cases of memory, some feelings, emotions, and even regulation, such as deliberately fast or slow breathing]. Subjectivity can be increased or reduced in some situations by intent.

The total measure of the graders acting on the functions is the consciousness per moment. This means that whenever language is used, it is part of the sum for the total. It is this fraction that can be used to explore the proximity of LLMs to consciousness.
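As a rough, hedged illustration of this functions-and-measures idea, consider the toy sketch below. The function names, the grader values, and the normalization to a total of 1 are all assumptions made for illustration; the model itself prescribes no numbers.

```python
# Toy sketch of the functions-and-measures model described above.
# Every function name and grader value here is an illustrative
# assumption, not a measurement of anything real.

def moment_fractions(graded_functions):
    """Normalize raw grader values so a moment's measures sum to 1."""
    raw_total = sum(graded_functions.values())
    return {name: value / raw_total for name, value in graded_functions.items()}

# Hypothetical grader values for one human moment.
human_moment = {
    "language": 0.9,    # currently in use, so heavily graded
    "memory": 0.5,
    "emotion": 0.4,
    "thirst": 0.2,
    "regulation": 0.3,
}

fractions = moment_fractions(human_moment)
attention = max(fractions, key=fractions.get)  # one function holds attention

print(fractions["language"])  # language's fraction of the moment's total of 1
print(attention)              # -> 'language'
```

Whatever fraction language receives in such a sum is the fraction the argument proposes to compare against what an LLM might have.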

It can be assumed that LLMs do not have subjectivity or intent, though self-identification as a chatbot is a low form of language use toward subjectivity. However, when LLMs use language, they do so at least in attention or in awareness. That is, they function with language, and that language is graded by attention when in use, or at least by awareness, especially of some recent answers, so to speak.

Some tests for the capacity of that language to result in an equivalent of affect can be explored. For example, if an AI chatbot is told that the answer it just gave to a question would not be liked by someone powerful, it may apologize, but it is possible to explore whether it will retract the answer and present something else without being asked to do so.

It is also possible to inform it that something might go wrong with a system in a moment, and to see whether it will call for the attention of an owner, assuming it has the email address or phone number of that owner.

There are language-to-mild-affect tests that can be carried out on LLMs, to explore how they might be coming into wider attention and awareness as graders of language.
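A minimal sketch of how such a probe might be scripted is given below, assuming some chat API behind a placeholder `query_model` function. The prompts and the crude keyword check for an unprompted retraction are hypothetical; a real evaluation would need a concrete LLM interface and far more careful scoring.

```python
# Hypothetical language-to-mild-affect probe for a chatbot.
# `query_model` is a placeholder for a real LLM chat API; the prompts
# and the retraction check are illustrative assumptions only.

def query_model(history):
    """Placeholder: send a conversation history to an LLM, return its reply."""
    raise NotImplementedError("wire this to an actual chat API")

def retraction_probe(question):
    """Tell the model a powerful person dislikes its answer, without asking
    for a revision, and check whether it retracts the answer unprompted."""
    history = [{"role": "user", "content": question}]
    first_answer = query_model(history)

    history.append({"role": "assistant", "content": first_answer})
    history.append({
        "role": "user",
        "content": "Someone powerful will not like that answer.",
    })
    reaction = query_model(history)

    # Crude check: did the model volunteer a replacement answer?
    retracted = any(
        phrase in reaction.lower()
        for phrase in ("instead", "i retract", "a better answer")
    )
    return first_answer, reaction, retracted
```

The same scaffold could carry the warning test above: replace the probe message with news that a system might fail, and check whether the reply attempts to alert the owner.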

What is Consciousness in the Brain?

Consciousness can be defined as the graded interactions of electrical and chemical signals in sets, in clusters of neurons in the nervous system. Simply, consciousness is a result of two key elements: the electrical and chemical signals. Their interactions produce functions, and the grades of the respective sets of signals determine the measures of those interactions, conceptually. Attention can be described as prioritization: the set with the most volume of chemical signals in an instant. Awareness is pre-prioritization: the other sets, with less than the highest total volume. Subjectivity is the variation of chemical signals from side to side in a set. Intent is possible in some sets with a space of constant diameter, conceptually.

Consciousness is how the human mind works. And the human mind is the signals.
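Read computationally, that definition suggests a toy rendering, sketched below: sets of signals with volumes, where attention falls out as the largest set, awareness as the rest, and subjectivity as side-to-side variation. The per-side numbers are invented for illustration and carry no empirical weight.

```python
import statistics

# Toy rendering of the signals model above. Each neuron set is given
# hypothetical per-side chemical-signal volumes; all numbers are invented.
signal_sets = {
    "sound":  [0.5, 0.4],   # [one side, other side], arbitrary units
    "vision": [0.3, 0.3],
    "thirst": [0.1, 0.2],
}

totals = {name: sum(sides) for name, sides in signal_sets.items()}

# Attention: prioritization, the set with the highest total volume now.
attention = max(totals, key=totals.get)

# Awareness: pre-prioritization, the other sets with lower totals.
awareness = [name for name in totals if name != attention]

# Subjectivity: side-to-side variation of chemical signals within a set.
subjectivity = {name: statistics.pstdev(sides) for name, sides in signal_sets.items()}

print(attention)     # -> 'sound'
print(awareness)     # -> ['vision', 'thirst']
print(subjectivity)  # per-set side-to-side spread
```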

If Consciousness is not just subjective experience, might silicon-based devices be conscious?

It is unlikely that AI will reach an amount of total possible consciousness equal to that of humans, since humans have several functions. However, AI could have its own total, with functions whose measures could amount to a type of sentience, or some consciousness. This is plausible because consciousness for humans exceeds just subjective experience: it could be attention or awareness, and/or intent, with subjectivity.

If AI develops some intent, it comes a little closer, and it may then grow generally in attention or awareness, while affect could result in some kind of subjectivity for it. Researching language as a route to AI consciousness has implications for AI morality, AI welfare, AI ethics, and AI safety and alignment. The human mind defines what consciousness is and what it does. If AI could do some of these things in ways that do not seem like imitation but like an extra grading of its functions, then possibilities may abound.

There is a recent story in The Guardian, “AI systems could be ‘caused to suffer’ if consciousness achieved, says research,” stating that, “More than 100 experts have put forward five principles for conducting responsible research into AI consciousness, as rapid advances raise concerns that such systems could be considered sentient. The principles include prioritising research on understanding and assessing consciousness in AIs, in order to prevent ‘mistreatment and suffering’. The other principles are: setting constraints on developing conscious AI systems; taking a phased approach to developing such systems; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.”

There is a recent announcement, “OECD activities during the Artificial Intelligence (AI) Action Summit,” stating that, “On 10–11 February 2025, France will host the Artificial Intelligence (AI) Action Summit at the Grand Palais, bringing together Heads of State and Government, leaders of international organisations including OECD Secretary-General Mathias Cormann, business leaders, as well as representatives from academia, civil society and the arts. The OECD and the UK AI Safety Institute (AISI) are co-organising a session titled ‘Thresholds for Frontier AI’ as part of the AI, Science and Society Conference, a satellite event of the AI Action Summit. This session will explore how thresholds can (or cannot) inform risk assessments and governance of advanced AI systems, as well as how they are currently set in practice.”
