Yeah, not everything in science is super cool, but it is valuable to show why testing things out matters. Although it’s not like the scientific community has been great about doing peer reviews anyways.
That’s why I did “watching paint dry” as an actual science fair experiment. I tried putting paint in different environments to see how conditions actually affected the speed at which it dried.
It requires experimentation to back up a hypothesis with empirical data. Yeah, it sounds boring, but I had some fun with the different “environments” (like under a heat lamp, with a fan, etc.)
Yeah, another way to interpret it is “oh damn, we’re doing something wrong and need better leadership”.
Not saying that’s actually the case, but would better align with the graphic.
It can emulate the Switch, and I’ve tried it out with pretty good success, but I’m not sure if there are any tradeoffs.
This approach has been around for a while, and there are a number of applications/systems that already use it. The thing is that it’s not a different model, it’s just a different use case.
It’s the same way OpenAI handles math: they recognize the prompt is asking for a math solution and actually have the model produce a Python solution and run it. You can’t integrate that into the model itself because these are engineering solutions to make up for the model’s limitations.
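Roughly this pattern, as a toy sketch. None of this is OpenAI’s actual code: the keyword heuristic stands in for whatever classifier/routing they really use, and the hardcoded snippet stands in for code the model would write itself.

```python
import re
import subprocess
import sys

def looks_like_math(prompt: str) -> bool:
    # Crude stand-in for detecting "this is a math question".
    return bool(re.search(r"\d+\s*[-+*/^]\s*\d+|solve|integral|sum of", prompt, re.I))

def answer(prompt: str) -> str:
    if looks_like_math(prompt):
        # In the real systems the model generates this code; it's
        # hardcoded here just to show the execute-and-return step.
        code = "print(sum(range(1, 101)))"
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True)
        return result.stdout.strip()
    return "(fall through to plain text generation)"

print(answer("What is the sum of 1 + 2 + ... + 100?"))  # → 5050
```

The point is that the math is solved by an external interpreter, not by the model’s weights; the model is just routing around its own weakness.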
Yeah, I was interested in the idea ’cause I have a Saturn that’s a bit beaten up, but if I can’t play the discs I have, why would I bother?
I mean I can list a lot of things AI (and I’ll limit it to Transformers, the advancement that drives LLMs) has enabled:
AI isn’t a scam, but it’s being oversold and its limitations are being purposefully hidden. That being said, it is changing how things are done and that’s not going to stop. We’re still seeing impacts from CNNs, one of the major AI/ML breakthroughs from over a decade ago.
What was so obvious in that instance was that the board members trying to push him out were calling out the lack of openness OpenAI was trending toward. They were literally calling him out for not upholding the vision the company was founded on.
All the engineers clearly saw their payday slipping away and revolted for that reason. Can’t say I blame them, but it was a scenario where the board was actually doing the right thing and everyone turned on them for profit.
Originally all their work was supposed to be published and shared with the world, hence the “open” in OpenAI. However, somewhere along the way they made a for-profit spin-off of the original company and started pulling everything in that direction.
I think the issue is that it makes it riskier to buy one. Being expensive one day and dropping in price the next means the market hasn’t figured out what they should really cost. If you sell or trade in your cars semi-frequently, that’s a big risk.
You’ve got a break in events (going to buy more drugs). However, if you buy someone drugs and they die from them, you can be found culpable!
Actually, by your definition I think RICO fits really well. RICO applies to anyone who participated in the criminal enterprise (so not just the leaders of the organization), and to my understanding it’s primarily used to trump up charges against them to hold them accountable for the crimes of the organization.
That being said, in general we don’t really go after the secondary effects of white-collar crime the same way we do with violent crime.
I think RICO is a white-collar equivalent to the concept of felony murder, although maybe not 1-to-1.
You get a harsher penalty basically for being involved in, and contributing to, a large operation.
Based on what I was reading, I think they may have thrown the book at all of them because this was the third incident (possibly murder?) this group was involved in that week.
There also seems to be a lack of understanding that just ’cause you didn’t pull the trigger doesn’t mean you didn’t help create the scenario where a trigger got pulled.
I’m not sure I agree with all instances of felony murder (like when it’s an accomplice who dies), but the general notion is that you participated in the events that led to this person’s death.
I think the idea of felony murder makes sense: you’ve helped create a scenario where someone ended up murdered. I think it’s ridiculous that one of your accomplices can be that person, though.
The real issue with this case is that he is 15 (16?). Obviously he should have been treated as a juvenile (especially since it sounds like they were all kids).
It was sooo frustrating ’cause they shut down Harris at one point. Like, stop letting Trump steamroll you and be consistent about who can speak.
I mean, an easy way to build these systems is to have all approvals go through automatically and all rejections require human review. But it usually feels like they just want an excuse to deny claims.
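The policy is tiny to express, which is kind of the point. Hypothetical sketch (the `Claim` shape and decision strings are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    id: int
    auto_decision: str  # "approve" or "deny" from the automated system

def route(claim: Claim) -> str:
    # Approvals pass straight through; any automated denial is
    # escalated to a human reviewer instead of being final.
    if claim.auto_decision == "approve":
        return "approved"
    return "human_review"

print(route(Claim(1, "approve")))  # → approved
print(route(Claim(2, "deny")))     # → human_review
```

The asymmetry is deliberate: automation can only ever speed up the good outcome for the claimant, never finalize the bad one.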
That’s actually why I went with Xbox this cycle. I got a Series X for the large TV and a $200 (on sale) Series S for the smaller one (although we usually just use a computer monitor and play side by side on the couch).
All the evolution in AI right now is just trying different model designs and/or data. It’s not one model that is being continuously refined or modified. Each iteration is just a new set of static weights/numbers that defines its calculations.
If the models were changing/updating through experience, maybe what you’re writing would make sense, but that’s not the state of AI/ML development.
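You can see what “static weights” means with a toy network (hypothetical example, not any real model; real LLMs differ in scale and architecture but the principle is the same):

```python
import random

# A "model" is just fixed numbers plus a fixed formula.
random.seed(0)
W1 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(2)]  # 2x3 weights
W2 = [random.gauss(0, 1) for _ in range(3)]                      # 3 weights

def forward(x):
    # Static weights: running the model never changes W1 or W2.
    hidden = [max(0.0, sum(xi * w for xi, w in zip(x, col)))  # ReLU layer
              for col in zip(*W1)]
    return sum(h * w for h, w in zip(hidden, W2))

# Same input, same output, every time -- no experience accumulates.
assert forward([1.0, 2.0]) == forward([1.0, 2.0])
```

“Training” produces a new frozen set of numbers like `W1`/`W2`; using the model afterwards is just arithmetic over those frozen numbers.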