by Dan » Tue Apr 24, 2012 10:09 pm
by Dan » Tue Apr 24, 2012 10:10 pm
by WCMeyer » Tue Apr 24, 2012 11:46 pm
by Kazon Nystrøm » Wed Apr 25, 2012 12:40 am
by de officiis » Wed Apr 25, 2012 6:04 am
by Kolokol888 » Wed Apr 25, 2012 7:02 am
by exposno1 » Wed Apr 25, 2012 7:16 am
by Thomas the doubtful » Wed Apr 25, 2012 7:31 am
by Dan » Wed Apr 25, 2012 7:44 am
Shouldn't the discussion of such threats include (and conclude) that handing yet more power to fewer and fewer intellectuals has almost always yielded a grinding down of individual rights and put a governor on our prosperity . . . i.e., that it isn't worth the cost?
Hell, didn't Dan himself survive and thrive in direct contradiction to what the "experts" told him? "Experts" don't know Dan's business/customers and they don't know what's best for each individual. "Experts" and popular opinion will give you conventional wisdom, which eventually succumbs to progress--but not without a lot of waste along the way. Politicians and intellectuals will give you the waste while the rest of us figure ways around their nonsense. Why not forgo the first part? Why not stop calling for politicians to solve our problems (I contend they'll create yet more, without solving the initial ones--largely created by these very same politicians) and ask them to start releasing more and more decision-making to "The People," as in individuals or groups working in cooperation, rather than those forced or guided through coercion?
I say leave me with threat of the nanobots and keep me from under the thumb of the intellectuals who are looking for yet another excuse to lord over me. Or must I yield to the will of the short-sighted and panicky majority again?
by Dan » Wed Apr 25, 2012 8:02 am
Thomas the doubtful wrote: I loved it. You could say it gave me an intellectual boner. I didn't agree on everything, but I loved it. But when you talked about the "down with everything" article and people needing to "rise up" (it's late at night), correct me if I am wrong, but you sounded like you want more direct democracy... fine. But from that could come the "tyranny of the majority," which is not the best option, now is it?
by Dan » Wed Apr 25, 2012 8:22 am
These doomsday prophets like Bostrom really scare me with their overactive imaginations. A global thermonuclear war would not be the end of the species. We've been through this before, e.g. the plague, and we came through that just fine. Nature, and we are part of nature, has an unbelievable capacity for adaptation. If bees go extinct, nature will find some other way of pollinating, or different species of plants will take over--no big deal.
We've heard all this before, like when cloning was developed, and what became of that? Nothing. We forbade cloning, which shows you that we do in fact have the mechanisms to put the black ball back in the bag--it's just a matter of choice.
I think the trap is thinking that the way things are is the way things have to be, which is not the case. We can't conceive of a non-fossil-fuel-based industrial revolution because that's not the way it was... but it could have been. We can't conceive of flowers without bees because we have bees... that doesn't mean bees must exist for flowers to exist.
Governments are not the species, and the gridlock of Washington has no bearing on the future of the species. The cataclysmic outcome, if the American political system (or the global one, for that matter) continues to fail, will be no more US of A... that's all. But the American people will still be there and life will go on. Dan mentions the cultural revolutions of the '60s and '80s, and I say: look, that was just a few decades ago; people can change really fucking fast. If we have to, we can adapt; the fact that we don't adapt before we have to doesn't mean we're incapable.
It's fascinating to think of all the things that could go wrong, but if you're thinking about it, chances are you will also be prepared for when the time comes.
by Thomas the doubtful » Wed Apr 25, 2012 8:44 am
Dan wrote: Thomas the doubtful wrote: I loved it. You could say it gave me an intellectual boner. I didn't agree on everything, but I loved it. But when you talked about the "down with everything" article and people needing to "rise up" (it's late at night), correct me if I am wrong, but you sounded like you want more direct democracy... fine. But from that could come the "tyranny of the majority," which is not the best option, now is it?
We need to replace the corrupt people that are in power now. That's not the same as replacing the representative system. You would still have representatives, they would just be new people (there's a famous scene in the remake of The Untouchables where a corrupt jury bought off by Al Capone is replaced in its entirety before the trial begins, thereby thwarting Capone's attempts to rig his trial. THAT'S what I was suggesting).
It's why I went over in the show how we would then go about electing people who WOULD represent us... via things like public donations to campaigns ("we all own a share of stock" in getting the candidate elected, etc.) and social networking to help replace TV ads... because otherwise, once you throw the current bums out, you would simply be replacing them with new bums if you didn't elect real alternatives.
Hope that helps clarify...
by StCapps » Wed Apr 25, 2012 11:01 am
Funny thing is I just finished watching that TNG episode like an hour before you made this post.
Kazon Nystrøm wrote: Good show, made me think of this-
Some will get it.
by exposno1 » Wed Apr 25, 2012 11:33 am
They can serve a great purpose, just not through the power of the government. The system is broken--you sold me a shirt that says so--so why would I embrace the idea of the system being used to implement these ideas?
Dan wrote: We are run by idiots right now. Does that mean that any expert advice will simply be filtered through the idiots, thereby losing all its value to us, while the idiots use the rationale of the intelligent merely for their own purposes? If that's the case, experts and highly intelligent people serve little purpose in our system (which dooms it, and maybe us, if the worst predictions of these people ever come to pass).
I see them as having roles in the private sector--suggesting solutions to the problems from think tanks, corporations, charities, privately-funded universities and the like--through non-government means.
Dan wrote: We have evolved to deal with certain types of situations. The scenarios we are likely to face more and more due to our technological level right now are not things we have evolved to deal with in the past. The same applies on the macro-societal level too. Our systems (government, for example) are not set up to deal with these sorts of things very well (perhaps making us see only the A and B choices listed above, as opposed to any reasonable compromise).
Ex...in this sort of case, what do you see the role of "experts" and deep thinkers and scientists being?
by Dan » Wed Apr 25, 2012 12:18 pm
exposno1 wrote: They can serve a great purpose, just not through the power of the government. The system is broken--you sold me a shirt that says so--so why would I embrace the idea of the system being used to implement these ideas?
Dan wrote: We are run by idiots right now. Does that mean that any expert advice will simply be filtered through the idiots, thereby losing all its value to us, while the idiots use the rationale of the intelligent merely for their own purposes? If that's the case, experts and highly intelligent people serve little purpose in our system (which dooms it, and maybe us, if the worst predictions of these people ever come to pass).
Well, we DID include ideas for fixing the broken system in that show too, right? The plan was not to have a "Council of the Wise" advising Harry Reid and Mitch McConnell, but the non-corrupt people that our "American Spring" and "Dan Carlin buy a share of your representative" plan (for lack of better labels) would help bring into office. Get rid of the bad people... give them as much wise advice as you can. I'm having a problem not seeing the "wisdom" in that. What's the opposite of that?
I would submit that the unseen efforts of the experts and highly intelligent, brought to us through cooperative channels, are much more efficient and responsive to our needs. That's a sort of "faith" in cooperative efforts and markets, yes,
But the point that all these people like Bostrom are making (in books on the potential for catastrophic risks) is that doing what we do best (the sort of innovation that Burke talks about in The Axemaker's Gift) is likely to bring very dangerous situations upon us very quickly. Thinking about those situations now gives us options to deal with them. Trying to deal with them once they occur has a very good chance of not being possible. If the "cooperative market effort" between, say, the Defense Dept. and a nanotechnology or AI company gives us a "black ball" scenario, who's going to stop them before it starts? Who's even going to know about it? It will be secret--both for national security and corporate reasons--and it's very possible that no one will be there to tell them, "Wait, this may be a very, very dangerous idea... maybe we shouldn't do it."
I mean, some of this stuff is easily as potentially dangerous as nuclear weapons. How comfortable would any of us be with "The (commercial) Market" having control over those?
Boethis, in another thread, had a great line (I think he quoted it... but it's great anyway): "We are like children playing with handguns." If true (and I think it is), what sort of risk mitigation is prudent?
In answer to that very question, from the Atlantic article where Bostrom is basically asked, "Why devote any resources to this since the threat is so unknown and amorphous?"
Some have argued that we ought to be directing our resources toward humanity's existing problems, rather than future existential risks, because many of the latter are highly improbable. You have responded by suggesting that existential risk mitigation may in fact be a dominant moral priority over the alleviation of present suffering. Can you explain why?
Bostrom: Well suppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially---somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do. There are so many people that could come into existence in the future if humanity survives this critical period of time---we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions and billions times more people than exist currently. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous under ordinary standards.
but no less faithful than embracing the idea that centralized authorities will ignore their own interests and the interests of their benefactors (NOT "The People") to somehow make good choices. When does that ever work? Even if they were "good" people (they aren't), they just can't plan for each of us without trying to plan for all of us--which will be against our will in many instances and full of costly unintended consequences.
And when they're wrong, we have to live with whatever b.s. monstrosity has been created for decades to come--these mechanisms rarely go away, even when they're proven to be failures.
What show was it where you kept saying the unspoken close to each of these science arguments is, ". . . and something should be done about it!"? You're suggesting we unleash a haystack of junk science upon ourselves to solve what would, at most, be a few needles of problems. How could we possibly know the needles from the hay and then put the right resources on those particular ones? Wouldn't it be more likely that we spread the money amongst the hay, thereby not really preparing for anything?
Your argument that "there won't be enough time" is the argument always used. It's smart; it can't be argued against, because who can know? It's the Jack Bauer argument. I can only counter that "there's usually plenty of time." Not comforting, but it has to be our answer, else we risk shooting off into the dark, hoping to hit something.
Well, when we did the whole show (that we scrapped) on the Bostrom/bees stuff, I got into the idea of "Do we really want rushed science?"... so I understand what you are saying. I have, as you said, pointed out the downsides before. But, as someone who knows well my struggles with the Climate Change issue and how it affects rights, individual choice, empowers unelected bodies, etc., you know that I have thought about this. There are some serious things that need to be reconciled. One cannot just say, "Screw it... most of this stuff is scare tactics... the government will use it to control us all... the legislators will muck it up anyhow..." as reasons to do nothing and/or not listen to what scientists say (especially if enough of them say it). One is tempted (especially if one has a podcast with a certain title) to suggest common sense balance is what's needed. One of the things that makes this issue so vexing is that, by the nature of the (potential) threat, it defies that approach.
From another of Bostrom's papers... talking about how different the threat is (when discussing this unstable period he and others see us in right now) versus what we have traditionally evolved to deal with:
2 The unique challenge of existential risks
Risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have not evolved mechanisms, either biologically or culturally, for managing such risks. Our intuitions and coping strategies have been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, droughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.
With the exception of a species-destroying comet or asteroid impact (an extremely rare occurrence), there were probably no significant existential risks in human history until the mid-twentieth century, and certainly none that it was within our power to do something about.
The first manmade existential risk was the inaugural detonation of an atomic bomb. At the time, there was some concern that the explosion might start a runaway chain-reaction by “igniting” the atmosphere. Although we now know that such an outcome was physically impossible, it qualifies as an existential risk that was present at the time. For there to be a risk, given the knowledge and understanding available, it suffices that there is some subjective probability of an adverse outcome, even if it later turns out that objectively there was no chance of something bad happening. If we don’t know whether something is objectively risky or not, then it is risky in the subjective sense. The subjective sense is of course what we must base our decisions on. At any given time we must use our best current subjective estimate of what the objective risk factors are.
A much greater existential risk emerged with the build-up of nuclear arsenals in the US and the USSR. An all-out nuclear war was a possibility with both a substantial probability and with consequences that might have been persistent enough to qualify as global and terminal. There was a real worry among those best acquainted with the information available at the time that a nuclear Armageddon would occur and that it might annihilate our species or permanently destroy human civilization. Russia and the US retain large nuclear arsenals that could be used in a future confrontation, either accidentally or deliberately. There is also a risk that other states may one day build up large nuclear arsenals. Note however that a smaller nuclear exchange, between India and Pakistan for instance, is not an existential risk, since it would not destroy or thwart humankind’s potential permanently. Such a war might however be a local terminal risk for the cities most likely to be targeted. Unfortunately, we shall see that nuclear Armageddon and comet or asteroid strikes are mere preludes to the existential risks that we will encounter in the 21st century.
The special nature of the challenges posed by existential risks is illustrated by the following points:
· Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach – see what happens, limit damages, and learn from experience – is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions.
· We cannot necessarily rely on the institutions, moral norms, social attitudes or national security policies that developed from our experience with managing other sorts of risks. Existential risks are a different kind of beast. We might find it hard to take them as seriously as we should simply because we have never yet witnessed such disasters. Our collective fear-response is likely ill calibrated to the magnitude of threat.
· Reductions in existential risks are global public goods and may therefore be undersupplied by the market. Existential risks are a menace for everybody and may require acting on the international plane. Respect for national sovereignty is not a legitimate excuse for failing to take countermeasures against a major existential risk.
· If we take into account the welfare of future generations, the harm done by existential risks is multiplied by another factor, the size of which depends on whether and how much we discount future benefits [15,16].
In view of its undeniable importance, it is surprising how little systematic work has been done in this area. Part of the explanation may be that many of the gravest risks stem (as we shall see) from anticipated future technologies that we have only recently begun to understand. Another part of the explanation may be the unavoidably interdisciplinary and speculative nature of the subject. And in part the neglect may also be attributable to an aversion against thinking seriously about a depressing topic. The point, however, is not to wallow in gloom and doom but simply to take a sober look at what could go wrong so we can create responsible strategies for improving our chances of survival. In order to do that, we need to know where to focus our efforts.
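For what it's worth, the back-of-the-envelope math behind both the "astronomical stakes" argument in the Atlantic quote and the discounting point in the last bullet can be sketched in a few lines. All the numbers below are arbitrary stand-ins I've picked for illustration, not figures from Bostrom's papers:

```python
# Illustrative only: arbitrary numbers chosen to show the shape of the
# expected-value argument, not Bostrom's actual figures.

# Potential future people if humanity survives (the "billions of
# billions" scale from the Atlantic quote; stand-in value).
future_lives = 10 ** 18

# Suppose some mitigation effort shaves a sliver off extinction risk.
risk_reduction = 1e-8

# Expected lives preserved, versus a large, certain present-day benefit
# (say, a stand-in of one billion people lifted out of poverty).
expected_gain = risk_reduction * future_lives   # about 1e10
present_benefit = 10 ** 9

print(expected_gain > present_benefit)          # True: the sliver wins

# The discounting point: how much future generations count depends
# entirely on the discount rate you apply to future benefits.
def discounted_weight(years: float, annual_rate: float) -> float:
    """Present-day weight of one unit of benefit `years` from now."""
    return 1.0 / (1.0 + annual_rate) ** years

print(discounted_weight(500, 0.03))  # ~4e-07: 500 years out, 3% rate
print(discounted_weight(500, 0.0))   # 1.0: no discounting, full weight
```

With any nonzero discount rate, the multiplier on future harm collapses fast, which is why that last bullet flags the discount rate as the crux of the whole calculation.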
snip
I see them as having roles in the private sector--suggesting solutions to the problems from think tanks, corporations, charities, privately-funded universities and the like--through non-government means.
Dan wrote: We have evolved to deal with certain types of situations. The scenarios we are likely to face more and more due to our technological level right now are not things we have evolved to deal with in the past. The same applies on the macro-societal level too. Our systems (government, for example) are not set up to deal with these sorts of things very well (perhaps making us see only the A and B choices listed above, as opposed to any reasonable compromise).
Ex...in this sort of case, what do you see the role of "experts" and deep thinkers and scientists being?
My support for this route comes from examples like:
i) the new World Trade Center building, which I'm staring at out my window--how long has building it taken, with the public and government agencies playing such a big role? (Admittedly, this is Port Authority property--but that just adds to the argument/question of why it hasn't been privatized); and
ii) our space program--getting our next unmanned vehicles to Mars will take decades if we wait on NASA, but it could happen in a few years if we privatize it (again, it's complex because there has to be gov't involvement here as well--but largely because the gov't claims domain over space).
Those are things that are wanted and pursued at great cost and with great effort . . . and with great waste of time and resources.
Your idea just engages us in yet more preemptive war(s)... albeit on a different front. How much sheer waste do we need as a result of fears over things like bird flu, WMDs, ice ages, terrorist threats, global meltdowns, communist sympathizers and the like?
We need fewer of these, not more!