• 0 Posts
  • 10 Comments
Joined 1 year ago
cake
Cake day: June 14th, 2023

help-circle

  • Separately, Jong has also alleged that Apple subjected her to a hostile work environment after a senior member of her team, Blaine Weilert, sexually harassed her. After she complained, Apple investigated and Weilert reportedly admitted to touching her “in a sexually suggestive manner without her consent,” the complaint said. Apple then disciplined Weilert but ultimately would not allow Jong to escape the hostile work environment, requiring that she work with Weilert on different projects. Apple later promoted Weilert.

    As a result of Weilert’s promotion, the complaint said that Apple placed Weilert in a desk “sitting adjacent” to Jong’s in Apple’s offices. Following a request to move her desk, a manager allegedly “questioned” Jong’s “willingness to perform her job and collaborate” with Weilert, advising that she be “professional, respectful, and collaborative,” rather than honoring her request for a non-hostile workplace.


  • Akisamb@programming.devtotumblr@lemmy.worldDots connected
    link
    fedilink
    arrow-up
    12
    arrow-down
    10
    ·
    4 months ago

    You are in a bubble. A neo nazi march was banned two weeks ago in France before being allowed again by the judicial system. The exact same scenario has been repeating for pro-palestine protests.

    At least in France, the scenario seems to be that the government wants to ban any controversial march and is being kept under control by the justice system.


  • I’m afraid that would not be sufficient.

    These instructions are a small part of what makes a model answer like it does. Much more important is the training data. If you want to make a racist model, training it on racist text is sufficient.

    Great care is put in the training data of these models by AI companies, to ensure that their biases are socially acceptable. If you train an LLM on the internet without care, a user will easily be able to prompt them into saying racist text.

    Gab is forced to use this prompt because they’re unable to train a model, but as other comments show it’s pretty weak way to force a bias.

    The ideal solution for transparency would be public sharing of the training data.




  • It’s absolutely amazing, but it is also literally and technologically impossible for that to spontaneously coelesce into reason/logic/sentience.

    This is not true. If you train these models on game of Othello, they’ll keep a state of the world internally and use that to predict the next move played (1). To execute addition and multiplication they are executing an algorithm on which they were not explicitly trained (although the gpt family is surprisingly bad at it, due to a badly designed tokenizer).

    These models are still pretty bad at most reasoning tasks. But training on predicting the next word is a perfectly valid strategy, after all the best way to predict what comes after the “=” in 1432 + 212 = is to do the addition.