Politics and AI [message #98135]
Tue, 17 December 2024 11:52
Kingfish
Messages: 565   Registered: November 2012
Illuminati (1st Degree)
I'm not anti-AI as much as I'm anti-specific uses. Mainly, the uses that degrade the quality of something that has always been done by people, pushed by those who don't care whether they're doing the right thing or not.
I'm conflicted.
Re: Politics and AI [message #98286 is a reply to message #98285]
Wed, 29 January 2025 16:32
Wayne Parham
Messages: 18832   Registered: January 2001
Illuminati (33rd Degree)
I would argue that the biggest challenges in AI are still the same ones we've had since the 1950s. It isn't political. It's technical.
The Large Language Model is really excellent stuff, but it isn't capable of attaining what many today are calling "AGI" - the "G" being "general," and the idea being "human-like." Some will tell you that current LLM technology is capable of that, but they are in denial.
The disconnect, in my opinion, is actually the fact that the LLM doesn't have an understanding of concepts. Even multi-modal systems - those that consider images, sounds and words - don't put those together as concepts.
Even when an LLM is trained with a lot of source material, using a ton of time and energy, it still doesn't understand the simplest of concepts, like inside versus outside, above versus below or equal versus unequal. It only knows the words, and maybe the pictures or the sounds. It doesn't "connect" those with any concepts.
So that's the first problem.
The second problem is this: Even when a neural network is made capable of that, consider the system it is modeled from: The human brain. We're an incredible machine and our brains are able to do a lot of mental stuff. But a lot of it is still assumptions, guesses, estimations and even fantasies.
Take, for example, the resistance of most people to the idea that the Earth was basically spherical and revolved around the Sun. Even when faced with a lot of facts that "fit the picture" of the model of the Solar System we all understand today, many people thought the Earth was flat and the Sun revolved around us. Some thought this for hundreds of years after the current Solar System model was pretty much proven.
Our brains tend to "fill in the blanks" of stuff we don't know. And we are prone to being easily influenced. Repeat something enough and it appears as fact. That's just how we're built. It's hard to retrain ourselves counter to this nature.
We're great at training for motor movements. Stuff like walking, riding a bike, gymnastics and other physical activities. We can teach ourselves to do incredible feats by practicing something repeatedly. And we can do that sort of thing with our thought processes too, but that's sort of an awkward way to learn something that is deterministic.
From a computing standpoint, it's kind of weird to have to practice something like mathematics to be good at it. Same thing with learning and reciting facts. We don't just store and retrieve them. Those "facts" we learn have to be repeated to be remembered, and they sometimes morph in our memories over time.
Cool stuff to consider. But odd issues to deal with in a computing system.
So I couldn't care less about politicizing AI. First thing is to get a handle on it from a technical perspective. I love that stuff. Just love it! But right now, a lot of this hype is merely popular discourse.
It's cool to see the public enamored with it now - not just the geeks and the techies. But that's really all it is right now. It's a popular fad. Maybe calling it a fad is unfair - it's more than a fad - because now there are some genuinely useful things it can do.
But the "fad" part is the misunderstanding of its capabilities, the public fantasy, really. And of course, the organizations that want to politicize it. They're the most "artificial" intelligence there is.
Re: Politics and AI [message #98287 is a reply to message #98286]
Thu, 30 January 2025 07:53
Rusty
Messages: 1247   Registered: May 2018   Location: Kansas City Missouri
Illuminati (3rd Degree)
It wasn't politicized so much as it was developed in this country for financialized use. That's what the brouhaha is about. The Chinese were simply able to make an AI app that is better, open source, and free, at a fraction of the cost our big tech companies were investing in their potential cash cow.
Then the politicization comes in with the claim that it's a Chinese information-gathering tool. From the link:
Alarm bells immediately sounded in Washington. US officials claimed the app is a supposed “national security” threat — their favorite excuse to justify imposing restrictions on Silicon Valley’s Chinese competitors.
The US Navy promptly banned DeepSeek, citing “potential security and ethical concerns”.
Starting in Donald Trump’s first term, and continuing through the Joe Biden administration, the US government has waged a brutal technology war and economic war against China.
Washington hit China with sanctions, tariffs, and semiconductor restrictions, seeking to block its principal geopolitical rival from getting access to top-of-the-line Nvidia chips that are needed for AI research — or at least that they thought were needed.
DeepSeek has shown that the most cutting edge chips are not necessary if you have clever researchers who are motivated to innovate.
This realization unleashed pandemonium in the US stock market.
In just one day, Nvidia shares fell 17%, losing $600 billion in market cap. This was the largest one-day drop in the history of the US stock market.
It just shows how hypocritical our politics are, and how complacent our industry has become, while a country once dismissed as third world now excels in all these aspects. Our exceptionalism is a joke.
Re: Politics and AI [message #98288 is a reply to message #98287]
Thu, 30 January 2025 09:25
Wayne Parham
Messages: 18832   Registered: January 2001
Illuminati (33rd Degree)
Rusty quoted "Ben Norton" (Thu, 30 January 2025 07:53): "...not necessary if you have clever researchers who are motivated to innovate..."
This "Ben Norton" fellow hurls a lot of insults, mostly attacking groups of people, without talking much about technology. This kind of stuff annoys me because it is all about personalities and nothing about principles.
There are clever people all over the world. There are also noisy people that aren't so clever, all over the world.
As for me, I've studied and implemented all kinds of computing systems - many that fall into the realm of artificial intelligence - since the 1970s. My personal heroes are a "who's who" of the field. Guys like Alan Turing, John von Neumann, Frank Rosenblatt, Douglas Hofstadter and Marvin Minsky head the list, and those are just a start. I have practically every book they wrote, some dating back to the 1950s. So I know a lot about the history, the current technologies and the potential futures of AI. It's one of my biggest passions.
Sorry if that sounds like "pulling rank," although, admittedly, that's kind of what I'm doing.
My point is this: Most people throwing opinions around about these technologies sound like school children to me. They have almost no understanding or experience.
And then when they march it into some kind of political agenda, it's even worse.
Re: Politics and AI [message #98290 is a reply to message #98289]
Thu, 30 January 2025 16:42
Wayne Parham
Messages: 18832   Registered: January 2001
Illuminati (33rd Degree)
Truthfully, there are a lot of competing ideas in AI, and they aren't aligned with governments or politics or finance. The competing ideas are about things like how to build a network, its topologies, its layers and its flow.
I don't see the relevant issues surrounding the DeepSeek systems (and many other new AI developments) as being American versus Chinese. I don't see them as Capitalist versus Socialist. I don't even see them as proprietary versus open architecture.
In fact, "open source" doesn't really make sense for a neural network. Source code is programming rules, and neural networks don't have rules. So you can't have "open" source for a neural network, because it doesn't have source code at all.
Neural networks are trained.
There is support code for interacting with the neural network. That can be made open source, but it's actually trivial. And you can distribute the weight values of the trained model - the actual "brain" of the network - but that's not an open architecture. It's a result of training, and shipping results is exactly how proprietary systems are distributed.
So unless organizations publish all the training materials, along with the procedures they used to load and optimize them, what is published isn't at all "open source."
That's what you see in the DeepSeek models.
Not knocking them at all. Just saying that's how this works.
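To make that concrete, here's a minimal sketch, assuming PyTorch and a hypothetical checkpoint file, of what you actually get when a model is distributed:

import torch

# Load a distributed model checkpoint (hypothetical file name).
# What's inside is just a mapping of layer names to weight tensors.
state_dict = torch.load("model_weights.pt", map_location="cpu")

for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape), tensor.dtype)

# A typical line might look like:
#   layers.0.attention.wq.weight (4096, 4096) torch.float16
# Millions of opaque numbers. Nothing here tells you what the training
# data was, or why the network behaves the way it does.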
Techies aren't the ones talking about this primarily from a political or financial perspective. Other people - the talking heads - are the only ones making a fuss. Those are the ones that annoy me, basically because they either don't know what they're talking about or they're spinning things or most likely both.
Open architectures - both in hardware and software - just mean the internals are open for viewing and inspection. They aren't hidden or protected. There are licensing agreements that can be used to maintain intellectual property rights or to extend them in a limited fashion, or the open source stuff can be simply made free for anyone to use in any way. But it is distinguished from proprietary material by the nature of not being a "black box" that you can't see inside.
This whole mechanism - the description "open source" - has no meaning in a neural network. You don't program them with human-readable rules. Instead, you train them. They bias themselves, generating a lot of internal values called "weights." The collection of weight values isn't the same thing as source code, so you can't really call this "open source." It has no meaning here.
A neural network is definitely a black box.
That's one of the difficulties when working with neural networks. It's inherently hard to "look inside" a trained network to understand what sort of "reasoning" it applied to make a "decision."
So when a model is distributed - even openly - it's hard to see what you've got. You certainly cannot determine what the training was. That part is definitely not "open." Using "open source" to describe a neural network is a non sequitur. It makes no sense.
I don't really care about the semantics, normally, but it does make an interesting problem that I do care about. It is very hard to troubleshoot a neural network because it's not a state-engine or a rules-engine or any other kind of deterministic set of rules.
It also makes testing them difficult. You pretty much have to just run the model, try it out, and hope for the best. But the only things you can know for sure are the things you've asked it. You cannot know for sure how it will respond to things you haven't asked and you cannot even know that it will respond the same way twice.
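Here's a minimal sketch of why, using a toy vocabulary and softmax sampling, which is roughly how these models pick their next word (the numbers are made up):

import numpy as np

# Toy "next word" scores a model might produce for one fixed prompt.
logits = np.array([2.0, 1.5, 0.3])
words = ["inside", "outside", "above"]

# Softmax turns scores into probabilities. The output is then *sampled*,
# not looked up, so the same prompt can yield different answers.
probs = np.exp(logits) / np.sum(np.exp(logits))

rng = np.random.default_rng()
for _ in range(3):
    print(rng.choice(words, p=probs))

# Three runs can print three different words. You can test what you ask,
# but you can't be sure it will answer the same way twice.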
As for the need for GPU and FPU chips made by companies like Nvidia and others, that's also an interesting technical problem that has understandable repercussions. Changes have been coming for a long time.
For precise mathematical calculations, one needs a good floating point processor, or FPU. Lots of "traditional computers" have an integer processor that's separate from a floating point processor. It's been that way since the early 1960s, and it became really popular in the 1980s. You use the floating point processor for math where fractional values are needed.
Similarly, once computers were used for graphics, it was helpful to have a separate graphics processor or GPU. It had its own memory and processor, because screen operations require a lot of memory as a "map" for what's displayed on the screen, and the GPU processor can do the math for making lines and curves and even 3D operations. That stuff needs an FPU, but when used specifically for graphics, it's called a GPU.
Segue to neural networks: The "weights" in the network have fractional values. So while a neural network could be built from the ground-up, using digital neurons, they are generally simulated using existing chips having a von Neumann architecture. When doing so, the floating point variety is chosen. That's why FPU and GPU chips have been selling like hotcakes in the AI world.
One thing that's bugged all of us for the last decade or so is that while the network weights are fractional, they don't benefit at all from high precision. They work just fine with low-precision values. So a lot of the calculation work done by the floating point processor isn't needed. It has been clear for quite some time that a specific type of low-precision FPU chip is desirable for neural networks. It's been brewing for years, and you see some of that poking out right now.
Most technical people using neural networks understand this, but "using what we've got" made FPU and GPU processors convenient. That's why makers of FPU and GPU chips got a boost in sales for a while. It's also what has spawned the new breed of "AI chips": they're essentially floating point processors with reduced precision, because they don't need the precision for what they do.
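For a quick illustration - a minimal sketch, assuming NumPy and a made-up layer - here's the same computation at full and reduced precision:

import numpy as np

rng = np.random.default_rng(0)

# A made-up fully-connected layer: weights are small fractional values.
weights = rng.normal(0, 0.02, size=(512, 512)).astype(np.float32)
x = rng.normal(0, 1.0, size=(512,)).astype(np.float32)

y_full = weights @ x                     # full float32 math
y_half = (weights.astype(np.float16) @
          x.astype(np.float16)).astype(np.float32)   # half precision

# For network-sized values the low-precision result is nearly identical,
# which is why cut-down "AI chips" can skip full floating point hardware.
print(np.max(np.abs(y_full - y_half)))  # typically a very small number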
So it isn't political. It's just an evolution of the state of the art.
Re: Politics and AI [message #98292 is a reply to message #98291]
Fri, 31 January 2025 09:27
Wayne Parham
Messages: 18832   Registered: January 2001
Illuminati (33rd Degree)
For decades, AI wasn't useful in business. In fact, one of my favorite researchers, Melanie Mitchell, who studied under Douglas Hofstadter, said that in 1990 it was suggested to her that she leave AI off her resume.
One exception, in those days, was a technology called "expert systems," which were essentially rules engines: collections of human-readable rules that codified expertise on a particular subject.
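Just to show the flavor - a toy sketch with entirely made-up rules - the "intelligence" in an expert system looks like this:

# Each rule is a human-readable condition paired with a conclusion.
RULES = [
    (lambda f: f["temperature"] > 101.0, "possible fever"),
    (lambda f: f["temperature"] <= 101.0 and f["cough"], "possible cold"),
]

def diagnose(facts):
    # Fire every rule whose condition matches the known facts.
    return [conclusion for condition, conclusion in RULES if condition(facts)]

print(diagnose({"temperature": 102.3, "cough": True}))   # ['possible fever']

Unlike a neural network, you can read the rules and see exactly why it concluded what it did.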
Another exception was Monte Carlo simulation, which was used for predictive analysis.
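The Monte Carlo idea, in a bare-bones sketch with made-up numbers: simulate an uncertain quantity many times and read predictions off the distribution of outcomes.

import numpy as np

rng = np.random.default_rng(1)

# Made-up example: a year of sales with uncertain monthly growth,
# simulated 10,000 times.
trials = 10_000
months = 12
growth = rng.normal(0.02, 0.05, size=(trials, months))   # mean 2%, sd 5%
final_sales = 100.0 * np.prod(1.0 + growth, axis=1)      # start at 100 units

# Predictions come from the distribution of outcomes, not a single formula.
print("median:", np.median(final_sales))
print("5th-95th percentile:", np.percentile(final_sales, [5, 95]))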
But neural networks and genetic algorithms - two very important types of information processing used in the field of AI - were largely ignored in business because results generated by those kinds of systems were seen as too unreliable.
The first places I saw these approaches become useful in business were in things like optical character recognition and language translation. They were good at dealing with ambiguities and what I would call "messy data." I also started seeing these kinds of statistical processing approaches used in data warehouse analysis and in search algorithms.
That's when these technologies started being attractive to business, and so naturally, that's when AI started being more heavily funded.
Before that, you only saw AI technologies in universities and research organizations.