This is when the AI, in a microsecond, decided to destroy the human race.
Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.
Oh no, this is another one of those “are these dresses really the same” things.
Seriously though, that’s cute.
Denialists figured that out a long time ago. They just did it with adjusted graphs and cherry-picked data to sell their viewpoint, along with flashy websites and enough conspiracy to flavor the appeal. Plus, it was always easier to convince someone that the data is wrong or the scientists are in on the lie than to be upfront and say things are really bad and you’ll have to change your way of living to make any difference.
I know this is more about the art of presentation: the same data shown a different way can be understood better. But that’s why we have science communicators, to bridge the gap between the public and the scientists doing the work to get the data and comprehend it. We have fewer and fewer of those now. I realized the direction we were going when the major news channels got rid of their science sections. (Those news outlets changed a lot more later on too, not in a great way.)
It is so enjoyable to watch reaction videos of people just getting into the movies. The prologue blows them away with its immediate epicness, and then they get sucked into the Shire.
“The Road goes ever on and on / Down from the door where it began.”
The only bright spot in thinking back to the various pets I’ve had to let go is knowing they lived great lives in the time we cared for them, and the end only came because something had made living painful for them. So don’t worry about that day; focus on the time with them now, because that’s what you’ll remember.
Sometimes a 9, but usually bounce between a 2 and 3.
Non-canonical is key. It was a device used in the films to convey Gandalf using messengers to relay information. Had the moth (sorry, The Moth) been in the books, it absolutely would have had a name.
What will make you stay up longer instead of sleeping is wondering if it was the same moth. I say no, given the time span between events.
Ollama.com is another method of self-hosting. Figuring out which model type and size fits the equipment you have is key, but it’s easy to swap models out. That’s just an LLM; where you go from there depends on how deep you want to get into the code. An LLM by itself can work, it’s just limited. Most of the add-ons you see are extra pieces that add memory, speech, avatars, and other extras to improve the experience and abilities, or you can program a lot of that yourself if you know Python (there’s a minimal sketch of what that looks like below). But as others have said, the more you try to get out of it, the more robust a system you’ll need, which is why the best ones you find are online in cloud format. If you’re okay with slower responses and fewer features, though, self-hosting is totally doable, and you can do what you want, especially if you get one of the “jailbroken” models that has had some of the safety limits modified out to some degree.
Also, as mentioned, be careful not to get sucked in. Even a local model can sometimes be convincing enough to fool someone who wants to see something there. Lots of people recognize that danger, but then belittle the people who are looking for help in that direction (while marketing recognizes the potential profits and tries very hard to sell it to those same people).
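To make the “self-hosting is doable” point concrete, here’s a minimal sketch of talking to a locally running Ollama model over its default REST API. The model tag, URL, and timeout are just assumptions; pick whatever model your hardware can actually handle.

```python
# Minimal sketch, assuming Ollama is installed and running locally (default
# port 11434) and a model has already been pulled, e.g. with `ollama pull llama3`.
# The model tag below is only an example; smaller models suit weaker hardware.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint
MODEL = "llama3"  # example tag, swap for whatever you pulled


def ask(prompt: str, history: list | None = None) -> str:
    """Send one user message (plus any prior messages) and return the reply text."""
    messages = (history or []) + [{"role": "user", "content": prompt}]
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=300,  # local models on modest hardware can be slow
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]


if __name__ == "__main__":
    print(ask("In one sentence, what can you do for me offline?"))
```

Everything else people bolt on (memory, speech, avatars) ends up being extra code layered around calls like that one.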
And that is why Vulcans wanted to befriend humans as soon as possible, and other species also have their own proverbs about not messing with humans.
Telepathy never had any set rules; it was just used however it made sense for the story. Even Spock had differing abilities - mostly he’d do his mind melds while touching, but there’s one episode where he contacts and controls a guard through a wall. The good thing is that, as time went by, they used Deanna’s abilities, job function, and training better than in the early days of the stupid side comments like “I sense they’re not telling everything.” No, really?
I will give it to you: when it works, it does some magical stuff. But try designing complex things that are miracles of coding and then having them run on a half-assed computer. I want to say terminal - it’s not that, but those small fake computers that companies seem to think are better to get than an actual desktop because they’re cheap. I know that’s hardware, not Excel, but Excel does not run well on them, so…
Or worse, you get moved to 365, which doesn’t do most scripting and breaks everything that was working. That cloud shit is a problem.
Not at all. You just haven’t gotten deep enough into the beast to see the horror.
You didn’t factor in the variability of federation vs. a single platform: it affects not only how long it takes for everyone to see a post, if they see it at all, but also how many duplicates may be floating around. And I don’t know if you can predict that reliably, since we’re all still trying to figure it out.
The reason weather prediction seems to have gotten worse is that part of the forecast was based on using historical data to anticipate future events. That data isn’t as good a source now that things have changed, so they’ve had to extrapolate as best they can from the science of how weather systems work. Sometimes they get it close, sometimes they totally miss.
As others have said, climate science is looking at a much bigger picture and longer trends, with a different goal. I do disagree with the title, though - while the overall trend has been close to what we were warned about, the specifics have changed, because back then we were limited in what we knew and could measure, and there are things occurring now that we couldn’t have guessed at that will make things worse. The trend still points in the same general direction; what keeps shifting as we learn more is how big the change will be and how it affects the conditions that drive weather.
Basically, you can’t say that past models were perfect in their predictions and then have decades of “it’s worse than predicted”.
I know it’s unlikely without a major lab oops, but I’ve put smallpox back on my bingo card.
The “it’s too late” crowd isn’t, and has never been, the problem. They’ve been painted as the scapegoat by the same people who want to keep the status quo going. We can acknowledge that many things are way past fixing and still start doing something different to reduce even more damage. Whether it will matter in the end is irrelevant; doomers and optimists can still agree we need to change, and quickly.
I agree it can be used fallaciously; you see that a lot in the business world. My point was to include both the good and the bad honestly, not hide anything, and people won’t shut down if they get the good first. It also depends on the subject - if they’re on the right track and your suggestion leads to better results, that’s not as negative as telling someone they’re doing something incorrectly and offering a different way.
In the end, how you say things is just as important as what is said.
On constructive criticism - rule one is definitely to make sure it’s invited first, but second, the best way to “sweeten” a critique and make it easier to take is to put it between compliments. Don’t open with a bare list of problems or suggestions: tell them what you like first, then how they might change things, and then close with something else positive, or simply thank them for sharing it. Even if someone says they want to hear what people think, it’s normal to get defensive, so help lower that reaction first, and then leave them feeling appreciated even though you pointed out the issues you saw.
That is not much. It’s less than their increase in net earnings over last year.
Current LLMs would end that sketch quickly by agreeing with everything the client wants. Granted, they wouldn’t be able to actually produce it, but as far as the expert narrowing down the problems with the request goes, ChatGPT would be all excited about making it happen.
The hardest thing to do with an LLM is to get it to disagree with you, even with a system prompt. The training to make the user happy with the results runs too deep to undo.
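For what it’s worth, this is roughly what that attempt looks like against a local model (same assumed Ollama setup as in the self-hosting sketch above; the prompt wording and model tag are just examples). Even with a system message this blunt, the agreeable training usually wins.

```python
# Sketch of trying to force disagreement via a system prompt, again assuming a
# local Ollama instance on the default port with an example model pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3"  # example tag

messages = [
    {
        "role": "system",
        "content": (
            "You are a blunt reviewer. If the user's claim is wrong or weak, "
            "say so plainly and explain why. Never agree just to be polite."
        ),
    },
    {
        "role": "user",
        "content": "My plan is flawless and needs no changes. You agree, right?",
    },
]

resp = requests.post(
    OLLAMA_URL,
    json={"model": MODEL, "messages": messages, "stream": False},
    timeout=300,
)
resp.raise_for_status()
# In practice the reply still tends to open with agreement or praise before any
# pushback, which is that embedded "make the user happy" training at work.
print(resp.json()["message"]["content"])
```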