07-03-2022, 03:47 PM
Half a year later...
I'm posting a few thoughts in a few threads, but other than that I'll probably be mostly inactive here in the future.
zedro, I think we don't have any real chance of understanding one another, but I may as well expand on these last thoughts, which relate to what else I'm interested in.
(12-12-2021, 01:07 PM)zedro Wrote:
(12-12-2021, 03:04 AM)Asolsutsesvyl Wrote: I think this pro- vs. anti-science culture war will be a long-term mess....
Especially when the deception is that false dichotomy you present, the real question is, what is science, and who gets to say what it is. It's a battle of narratives driven by a war on individual conscience vs hierarchically enforced reality.
We do not agree on what science is, nor on what pseudoscience is. I also think that your idea, that the real distinction is one of individual conscience vs. hierarchically enforced reality, is a false dichotomy. That's because it's not the case that anything goes: neither the whims of an individual nor the whims of a hierarchy can decide for reality what it is. Fact and logic stand above all individuals and all hierarchies; they are supreme, and there for individuals and organizations to seek. Those who make a decent effort to truly honor that can further science, while those who don't are plainly anti-science.
Clear-cut criteria for what is and is not pseudoscience do, in many but not all cases, allow bypassing battles of narratives, because a few key details, once examined and found to plainly be one way or the other, settle the question. When that is the case, there is no battle of narratives, except on the part of those seeking to relativize truth and assert falsehoods. Indeed, an excessive focus on narratives is often a diversion from what really matters.
(12-12-2021, 01:07 PM)zedro Wrote:
Quote: First I think humanity needs to destroy the power of deceptive rhetoric if it is to be able to increase quality of thought in society and develop worthwhile philosophies much further. I think AI can be an answer to that, actually. Imagine a "spam filter" for manipulative patterns of communication, applied to all political discourse, and to the communications of governments and people in positions of power -- and also to screen all those who offer services and goods they claim will help others. Otherwise, there's the ancient problem that deception is produced much more quickly, and spread much more quickly, than it can be countered, dragging the overall level down terribly. I think technology is the only hope humanity has of growing past that obstacle. If the noise of bad rhetoric can be blocked like spam across society, so it no longer drags minds and hearts down, maybe something like a second enlightenment era and philosophy can then in turn fully develop and have an influence analogous to the first such historical period. Eventually there may be a bridging of the gap to a better spirituality.
So abandon the seeking of intelligent infinity and replaced with an algorithm that is assumed to be neutral and not controlled by anyone, but everyone will be essentially subservient to the "truth for the greater good"? Edit: the AI entity now becomes God, all knowing of truth and judges all.
What you describe is the result of one of 2 choices. I choose the opposite of any Centralized control system, or that of an AI based super-cult. I choose to eat organic lol.
Edit: I also find it interesting that the conscription of an "AI God" is a substitute for personal seeking, the ultimate gatekeeper. What happens to those who choose not to follow this new artificial God? Is this not the same dichotomy as before, the centralization and control of truth through hierarchy, in lieu of acceptance and cooperation?
I didn't think of making an "AI God". But I left the description rather sketchy, so the blanks can be filled in in many ways, including that one. If you think Hollywood and/or doomsday alarmism, then an AI God is probably what you'll think of. But there are also far more decentralized options.
But first, where is the seeking of intelligent infinity in the world at large? And I spoke not of replacing such a nonexistent striving of the masses, but of greatly altering the climate of media and technological communication.
Secondly, you ignore the blatant control of perceived truth through hierarchy involved in influencing/propaganda/PR operations. It's become a more relevant issue than ever. I think it would be great to have a way of filtering media (including social media) messages, to level the playing field, so that people are not simply spammed senseless when one or another interest group gains the upper hand by buying its way to it using algorithms and/or people.
Automated propaganda is still fairly crude, basically similar to what commercial spammers do. When things get fancier, it seems to involve deepfaking that is still rough enough to detect. Apart from certain governments who've tried to show off recently, like Russia, there are plenty of unscrupulous groups within the Western world. The "alternative" online worlds are, on the whole, largely owned by interests that spam the world for money and power.
There is also automated debunking, which relies on maintained databases. It's also crude at the moment. Here, dependable institutions would in the future need to work on such databases in order for software to then be able to use them and work in a basically trustworthy way.
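To make the idea concrete, here's a minimal sketch of how software might consult such a database, assuming a hypothetical collection of debunked-claim records and a crude text-similarity match; the record format, threshold, and matching method are all illustrative assumptions, not a description of any existing tool:

```python
from difflib import SequenceMatcher

# Hypothetical records a fact-checking institution might maintain.
# Structure and contents are illustrative assumptions, not a real database.
DEBUNKED_CLAIMS = [
    {"claim": "vaccines contain microchips", "verdict": "false",
     "source": "https://example.org/factcheck/123"},
    {"claim": "the earth is flat", "verdict": "false",
     "source": "https://example.org/factcheck/456"},
]

def lookup_claim(text, threshold=0.75):
    """Return the closest known debunked claim above a similarity threshold, or None."""
    best, best_score = None, 0.0
    for record in DEBUNKED_CLAIMS:
        score = SequenceMatcher(None, text.lower(), record["claim"]).ratio()
        if score > best_score:
            best, best_score = record, score
    return best if best_score >= threshold else None

if __name__ == "__main__":
    match = lookup_claim("vaccines contain microchips for tracking")
    if match:
        print(f"Possibly debunked ({match['verdict']}): see {match['source']}")
    else:
        print("No match in the database; further checking needed.")
```

A real system would obviously need better matching than raw string similarity, but the basic shape -- software doing lookups against institutionally maintained records -- is the point.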
A different kind of automated bullshit detection is also possible. Many rhetorical patterns have been described, and software could be made to recognize them more generally. Textbook deception sometimes simply relies on rhetorical tricks, or follows well-documented and very clear-cut patterns. An automated way of checking for that would be very handy. By itself, it would also not require any huge centralized fact-check databases.
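As a toy illustration of pattern-based detection, the sketch below flags a few rhetorical tricks with regular expressions; the categories, phrases, and example message are my own rough assumptions, and a real system would need far richer linguistic analysis:

```python
import re

# A toy catalogue of rhetorical patterns; the categories and phrases here
# are illustrative assumptions, far cruder than real detection would need.
RHETORICAL_PATTERNS = {
    "absolute claim": r"\b(always|never|everyone knows|no one can deny)\b",
    "appeal to fear": r"\b(they will destroy|before it'?s too late|wake up)\b",
    "name-calling": r"\b(shills?|sheeple|bootlickers?)\b",
    "false urgency": r"\b(act now|last chance|share before (?:it'?s|this is) deleted)\b",
}

def flag_rhetoric(text):
    """Return a list of (pattern name, matched phrase) pairs found in the text."""
    hits = []
    for name, pattern in RHETORICAL_PATTERNS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((name, match.group(0)))
    return hits

if __name__ == "__main__":
    message = "Everyone knows the shills are lying. Share before it's deleted!"
    for name, phrase in flag_rhetoric(message):
        print(f"{name}: '{phrase}'")
```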
Bullshit takes much less time and energy to create than to refute. That's why the only sensible way to deal with clear-cut, textbook BS-mongers is to treat them like spammers. Otherwise, you are plainly asking to have everything meaningful drowned out by deceitful noise.
With awesome filtering, people could get some peace and quiet, and time to actually examine messages in more detail. The intellectual level on social media could rise if filtering were used to make messages which argue clearly and basically honestly for anything -- anything at all -- more visible than trashy clickbait and thoughtless slogans.
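A filtering layer along those lines could work like a spam score in reverse: rank a feed so that the least flag-ridden messages surface first. The sketch below assumes some red-flag counter (such as the pattern detector above) is available; the stand-in counter and the ordering rule are purely illustrative:

```python
def rank_messages(messages, count_red_flags):
    """Sort messages so the least flag-ridden come first; ties keep original order."""
    return sorted(messages, key=count_red_flags)

def crude_flags(message):
    """Crude stand-in for a real detector: count exclamation marks and all-caps words."""
    return message.count("!") + sum(word.isupper() for word in message.split())

if __name__ == "__main__":
    feed = [
        "Wake up, sheeple! They will destroy everything, act now!!!",
        "Here is a study comparing the two policies; the data suggest a modest effect.",
    ]
    for message in rank_messages(feed, crude_flags):
        print(message)
```

The point is that nothing gets deleted; honest argumentation simply stops being buried under noise.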
I'm not suggesting mandatory filtering, or anything even remotely like what China does. I'm suggesting free software which everyone can use as they choose. The reason to use it would be that, if it works properly, it becomes a superior way to search for information.
I do wish, though, that societies would eventually grow up to the point where they do not follow charismatic leaders. Direct democracy could be made to work and scale thanks to the Internet. It would "merely" need to be implemented well, securely.