It's clearly elite though. They are making sophisticated moves. How do they rate on the global stage? I'm not sure. I have to assume lower quality than the West but better than less wealthy nations.
Essentially an escalatory action in response to US support, meant to signal how serious they are about their territorial expansion.
It says... We can touch you in the United States and are willing to do it. The implication is that they might even be willing to conduct terrorist attacks against our civilians.
Could be a play that they thought might put them in a better negotiation position... If you can believe that.
This is my read as a civilian who follows war. I am not an expert.
Frankly, I posted this hoping to attract some discussion amongst those who might actually be able to provide a decent read.
I don't really buy into the idea that it puts them in a position to negotiate much. It would just escalate things, solidify support against them. If they've got hopes of fracturing the west, this would be the opposite.
I make no argument regarding whether their actions are effective or not. Only suggesting what they might be thinking are the reasons they take these actions.
Honestly if they had blown up our planes... I'd say there is a real chance we blow up that shiny building where they planned that.
It's even possible we told them exactly that and this is why the plot didn't continue.
I kind of feel it compares to MAD. I don't think they want to provoke a war, but if they signal they are not afraid to engage political, civilian, or economic targets, then one must consider how to engage with them and what the response will be.
We are translating language from a ton of other species. Now we can see that they are using names and probably sharing information.
The brightest minds now want to use machine learning to tease out more low level features of language itself, across various species.
We are already calling monkeys and elephants by their native names y'all... And they are responding to this...
Think about it... This research is unlikely to stall. We're barely scratching the surface. The animals were having more nuanced conversation than we thought. They've been doing this for millions of years...
Condescension toward nature... Usually a mistake. I'm not saying they are all philosophers, but it seems very likely they've been having some pretty advanced conversations surrounding their own affairs. If we can call them by their name... Probably we're going to unlock some pretty interesting communication.
Some folks are professional and mature. In the best organizations, the management team sets the highest possible standard in terms of tone and culture. If done well, this tends to trickle down to all areas of the organization.
Another speculation would be that she's resigning for complicated personal reasons. I've had to do the same in my past. The real pros give the benefit of the doubt.
Because the old guard wanted it to remain a cliquey non-profit filled to the brim with EA, AI Alignment, and OpenPhilanthropy types, but the current OpenAI is now an enterprise company.
This is just Sam Altman cleaning house after the attempted corporate coup a year ago.
Below are excerpts from the article you linked. I'd suggest a more careful read-through, unless you give zero credibility, out of hand, to the first-hand accounts given to the NYT by both Murati and Sutskever...
This piece is built on conjecture from a source whose identity is withheld. The source's version of events is openly refuted by the parties in question. Offering it as evidence that Murati intentionally made political moves in order to get Altman ousted is an indefensible position.
'Mr. Sutskever’s lawyer, Alex Weingarten, said claims that he had approached the board were “categorically false.”'
'Marc H. Axelbaum, a lawyer for Ms. Murati, said in a statement: “The claims that she approached the board in an effort to get Mr. Altman fired last year or supported the board’s actions are flat wrong. She was perplexed at the board’s decision then, but is not surprised that some former board members are now attempting to shift the blame to her.”
In a message to OpenAI employees after publication of this article, Ms. Murati said she and Mr. Altman “have a strong and productive partnership and I have not been shy about sharing feedback with him directly.”
She added that she did not reach out to the board but “when individual board members reached out directly to me for feedback about Sam, I provided it — all feedback Sam already knew,” and that did not mean she was “responsible for or supported the old board’s actions.”'
This part of the NYT piece is supported by evidence:
'Ms. Murati wrote a private memo to Mr. Altman raising questions about his management and also shared her concerns with the board. That move helped to propel the board’s decision to force him out.'
INTENT matters. Murati says the board asked for her concerns about Altman. She provided them and had already brought them to Altman's attention... in writing. Her actions demonstrate transparency and professionalism.
In artificial intelligence, reasoning is the cognitive process of drawing conclusions, making inferences, and solving problems based on available information. It involves:
Logical Deduction: Applying rules and logic to derive new information from known facts.
Problem-Solving: Breaking down complex problems into smaller, manageable parts.
Generalization: Applying learned knowledge to new, unseen situations.
Abstract Thinking: Understanding concepts that are not tied to specific instances.
AI researchers often distinguish between two types of reasoning:
System 1 Reasoning (Intuitive): Fast, automatic, and subconscious thinking, often based on pattern recognition.
System 2 Reasoning (Analytical): Slow, deliberate, and logical thinking that involves conscious problem-solving steps.
Testing for Reasoning in Models:
To determine if a model exhibits reasoning, AI scientists look for the following:
Novel Problem-Solving: Can the model solve problems it hasn't explicitly been trained on?
Step-by-Step Logical Progression: Does the model follow logical steps to reach a conclusion?
Adaptability: Can the model apply known concepts to new contexts?
Explanation of Thought Process: Does the model provide coherent reasoning for its answers?
Analysis of the Cipher Example:
In the cipher example, the model is presented with an encoded message and an example of how a similar message is decoded. The model's task is to decode the new message using logical reasoning.
Steps Demonstrated by the Model:
Understanding the Task:
The model identifies that it needs to decode a cipher using the example provided.
Analyzing the Example:
It breaks down the given example, noting the lengths of words and potential patterns.
Observes that ciphertext words are twice as long as plaintext words, suggesting a pairing mechanism.
Formulating Hypotheses:
Considers taking every other letter, mapping letters to numbers, and other possible decoding strategies.
Tests different methods to see which one aligns with the example.
Testing and Refining:
Discovers that averaging the numerical values of letter pairs corresponds to the plaintext letters (see the code sketch after these steps).
Verifies this method with the example to confirm its validity.
Applying the Solution:
Uses the discovered method to decode the new message step by step.
Translates each pair into letters, forming coherent words and sentences.
Drawing Conclusions:
Successfully decodes the message: "THERE ARE THREE R'S IN STRAWBERRY."
Reflects on the correctness and coherence of the decoded message.
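As a concrete illustration of the pairwise-averaging scheme described in these steps, here is a minimal Python sketch. It is not the model's actual procedure: the decoder mirrors the averaging method the model reportedly discovered, the encoder is a hypothetical inverse added only so the round trip can be checked, and the first ciphertext is the published "Think step by step" example rather than the strawberry one.

```python
# Minimal sketch (not the model's actual code) of the pairwise-averaging cipher.
# Assumption: each plaintext letter is represented by two ciphertext letters
# whose alphabet positions (a=1 .. z=26) average to the plaintext letter's position.

def decode(ciphertext: str) -> str:
    """Average each pair of letter positions to recover the plaintext."""
    words = []
    for word in ciphertext.lower().split():
        letters = []
        for a, b in zip(word[0::2], word[1::2]):
            pos = (ord(a) + ord(b) - 2 * ord('a') + 2) // 2  # mean of the two positions
            letters.append(chr(pos + ord('a') - 1))
        words.append(''.join(letters))
    return ' '.join(words)

def encode(plaintext: str) -> str:
    """Hypothetical encoder: pick the pair (pos-1, pos+1) when possible, else (pos, pos)."""
    words = []
    for word in plaintext.lower().split():
        pairs = []
        for ch in word:
            pos = ord(ch) - ord('a') + 1
            lo, hi = (pos - 1, pos + 1) if 1 < pos < 26 else (pos, pos)
            pairs.append(chr(lo + ord('a') - 1) + chr(hi + ord('a') - 1))
        words.append(''.join(pairs))
    return ' '.join(words)

# The worked example: 'oy' -> t, 'fj' -> h, 'dn' -> i, 'is' -> n, 'dr' -> k, ...
print(decode("oyfjdnisdr rtqwainr acxz mynzbhhx"))         # think step by step
# Round trip on an arbitrary phrase to confirm the scheme is consistent.
print(decode(encode("there are three rs in strawberry")))  # there are three rs in strawberry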
Does the Model Exhibit Reasoning?
Based on the definition of reasoning in AI:
Novel Problem-Solving: The model applies a decoding method to a cipher it hasn't seen before.
Logical Progression: It follows a step-by-step process, testing hypotheses and refining its approach.
Adaptability: Transfers the decoding strategy from the example to the new cipher.
Explanation: Provides a detailed chain of thought, explaining each step and decision.
Conclusion:
The model demonstrates reasoning by logically deducing the method to decode the cipher, testing various hypotheses, and applying the successful strategy to solve the problem. It goes beyond mere pattern recognition or retrieval of memorized data; it engages in analytical thinking akin to human problem-solving.
Addressing the Debate:
Against Reasoning (ActorNightly's Perspective):
Argues that reasoning requires figuring out new information without prior training.
Believes that LLMs lack feedback loops and can't perform tasks like optimizing a bicycle frame design without explicit instructions.
For Reasoning (Counterargument):
The model wasn't explicitly trained on this specific cipher but used logical deduction to solve it.
Reasoning doesn't necessitate physical interaction or creating entirely new knowledge domains but involves applying existing knowledge to new problems.
Artificial Intelligence Perspective:
AI researchers recognize that while LLMs are fundamentally statistical models trained on large datasets, they can exhibit emergent reasoning behaviors. When models like GPT-4 use chain-of-thought prompting to solve problems step by step, they display characteristics of System 2 reasoning.
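To make "chain-of-thought prompting" concrete, here is a hypothetical prompt pair (not tied to any particular API); the only difference is whether the instruction asks the model to externalize its intermediate steps.

```python
# Hypothetical chain-of-thought prompt vs. a direct prompt for the same question.
question = "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"

direct_prompt = f"{question}\nAnswer with a single number."

cot_prompt = (
    f"{question}\n"
    "Let's think step by step: show each intermediate calculation, "
    "then give the final answer on its own line."
)

# Either string would be sent to a language model; only the chain-of-thought
# version asks the model to produce the System 2-style intermediate steps.
print(cot_prompt)
```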
Final Thoughts:
The model's approach in the cipher example aligns with the AI definition of reasoning. It showcases the ability to:
Analyze and understand new problems.
Employ logical methods to reach conclusions.
Adapt learned concepts to novel situations.
Therefore, in the context of the cipher example and according to AI principles, the model is indeed exhibiting reasoning.
Unsurprising that we know about it.