• 5 Posts
  • 57 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • […] a public institution is really not a great example of the general population […]

    Which I touched upon in my disclaimer, but in some ways it is a great example. Public institutions are defined by the general population, indirectly through their representatives creating the rules that govern them, and directly through contact with the public at large. Now if all our institutions still use this very outdated technology, and you have trouble convincing them - during a global pandemic, mind you - that using email is just as safe as using fax (so not safe at all, basically), then that speaks to a larger mindset in the general population.

    Many in the general public are of course a lot quicker, some might even say careless, in adopting new technology. But as a society we are rather slow, and there are surprisingly many individuals who are hesitant about or entirely resistant to adopting new technology. Fediverse usage is a bubble within a bubble here.

    The internet infrastructure is another good example of this on the societal level: there were plans in the 1980s [!] to lay a fibre-optic network between every publicly used building in the country, which would have gotten us a good part of the way towards adopting this new material at scale. But in the end it was deemed unnecessary and too expensive, and the project got canned (mixed in with rumours of a “close friendship” between the chancellor and a major copper producer). Instead, thirty years later, we have people going door to door collecting signatures for last-mile fibre projects that seldom reach quorum and thus almost never secure public funding.


    1. […] But also how are Germans technologically behind regarding common personal life?

    I bet that wherever in Germany you are, if you go to your local city government’s website right now, it will have a still-active fax number in its contact information. I guarantee it. Well, if they have a website, that is.

    It’s a bit silly as an example, but it highlights the central problem: adoption of new technology happens at a glacial pace, especially in public institutions. There are many reasons for that of course, some good, like the aforementioned inclination towards privacy, and some bad, like whatever allows fax machines to still be around.

    And don’t get me started on internet infrastructure… In an international comparison we certainly aren’t leading the field regarding adoption of new technologies.



  • So, I think it’s pretty stupid to argue whether “convicted felon” should be in his opening lede line for Wikipedia.

    True though that may be, I don’t think it’s surprising that this would happen. Since making the post I have been falling down a rabbit hole of finding out how Wikipedia handles situations like this, partly by taking more than a glancing look at the talk pages for the first time ever, and it’s fascinating.

    Currently my deepest point of descent is this sub-thread on the Admin board about the “consensus” boxes on top of talk pages being an undocumented and unapproved feature.


  • In Germany, Mein Kampf is banned except for educational purposes, eg in history class.

    Strictly speaking this is incorrect, although the situation is somewhat complicated. There are laws that can be and were used to limit its redistribution (mainly the rule against anti-constitutional propaganda), but there are dissenting judgements saying original prints from before the end of WW2 cannot fall under this, since they are pre-constitutional. One particular reprint from 2018 has been classified as “liable to corrupt the young”, but to my knowledge this only means it cannot be publicly advertised.

    What is interesting though is how distribution and reprinting were prevented historically: through copyright. As Hitler’s legal heir, the state of Bavaria held the copyright until it expired in 2015 and simply didn’t grant a licence for anything except editions with scholarly commentary. But technically, since then anybody can print and distribute new copies of the book. Whether this violates any law is then determined on a case-by-case basis after the fact.








  • It seems like it would be a bit confusing, though, if you had to relearn times whenever you travel somewhere (edit: and dates could flip over in the middle of a work day). But maybe you’d prefer that.

    I’d prefer that over having to change clocks when you travel, and over needing to know the location and possibly flip the date whenever you encounter a reference to a specific time, yes.

    Before they were invented, it was literally just anarchy. People set it to match people they knew. That’s what I was thinking of, but it could also just be one place where noon is at 12:00 PM.

    Yes, you would obviously do the latter. No sense in going back to the bad old days.

    Well, there’s not a round number of seconds in a day, or days in a year, for example, since they’re all naturally occurring and arbitrary.

    Days in a year, OK (except leap years). But seconds in a day are round (discounting days with leap seconds): 24 * 60 * 60 = 86,400, which is divisible by two. Did you mean they are not based on the decimal system? I’d be up for a decimal-based time system and a reorganised calendar, but that wasn’t the topic of discussion here.

    And then the Earth turns at a subtly non-constant rate, and people have settled on a seven day week.

    Yeah but none of that has much impact on the timezone debate.

    If you do have timezones, it doesn’t make sense to be inflexible with them when they run up against geography or trade and cultural ties, so they’ll be curvy, and geopolitics will itself change over decades and someone will want to change which one they’re in.

    Fair enough. I acknowledged this point in my other post, that there are historical reasons for timezones mostly rooted in administrative requirements. But I don’t think this is a good reason to not adopt a better system per se.

    All of this is a headache if you just want to do a calendar calculation.

    Exactly! So out with the old, in with the new. Sure, this will create some other headaches, especially given how deeply rooted some of the relevant nomenclature is in most languages, but the sooner we change this the less it will hurt. I see that it might be a non-starter given the inertia and disunity of globalised society working against it, but it still seems desirable, to me at least.



  • And when it does happen it’s usually clarified. In more automated contexts (e.g. a scheduled YouTube premiere) the software converts it automatically - the author inputs the date and time in their own timezone, and viewer sees the converted date and time in their own timezone.

    My point exactly, though: this is a whole lot of complexity we could just get rid of by using a single timezone, with the added benefit that it works without any automation or clarification. Next Tuesday 14:00? Same time for everybody, regardless of locality. Everyone will know what part of the solar day that is for them by habit.
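
    To illustrate the difference, here is a rough Python sketch (the cities and the meeting time are just made-up examples):

    ```python
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # standard library since Python 3.9

    # Today: one wall-clock statement needs a separate conversion per participant.
    meeting = datetime(2025, 7, 8, 14, 0, tzinfo=ZoneInfo("Europe/Berlin"))
    for tz in ("America/New_York", "Europe/Berlin", "Asia/Tokyo"):
        print(tz, meeting.astimezone(ZoneInfo(tz)).strftime("%Y-%m-%d %H:%M"))

    # With a single global timezone, "Tuesday 14:00" already denotes the same
    # instant for everyone; no conversion step or timezone lookup is needed.
    meeting_global = datetime(2025, 7, 8, 14, 0, tzinfo=timezone.utc)
    print("everyone:", meeting_global.strftime("%Y-%m-%d %H:%M"))
    ```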

    When it does happen it reminds us that the date and time falls on a different time of day for different participants.

    The complexity of coordinating different solar cycles is there either way and unavoidable. So why not use the simpler system?

    Meet me here tomorrow at 01:00

    Yes, semantic drift in these terms would be unavoidable, but I still see the long-term benefits to clarity outweighing the short-term costs.


  • We already have that for technology to use - the unix timestamp.

    A Unix timestamp is an offset from a fixed UTC reference point (the epoch), not a timezone. But fair enough, there is UTC. It’s not used by default however, except maybe by scientists and programmers.
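
    For what that distinction looks like in practice, a quick Python sketch (the timestamp value is arbitrary):

    ```python
    from datetime import datetime, timezone

    ts = 1720440000  # a Unix timestamp: seconds elapsed since 1970-01-01 00:00:00 UTC

    print(datetime.fromtimestamp(ts, tz=timezone.utc))  # the UTC date and time it encodes
    print(datetime.fromtimestamp(ts))                   # the same instant, rendered in the local timezone
    ```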

    Maybe I’m missing something. What do you think the benefits would be?

    Removing ambiguity from casual language. Currently when you state a time you are almost always implying your local timezone applies, which might be unknown information to the recipient, especially with written sources like these comments here. With everybody using the same timezone instead you would always make an unambiguous statement about the specific time by default.


  • Oh don’t get me wrong, I see how it makes sense. I’m just saying that 1) it is arbitrary nonetheless and 2) it doesn’t outweigh the benefits that could be gained by using a single global timezone. The incidence angle of solar radiation is hardly something most people need or even want to track beyond a certain degree (dawn, noon, dusk, midnight), and the times those coincide with at your latitude and longitude can be easily learned.


  • The fact that you state a preference for changing the very thing you cite as an example of something that shouldn’t be changed because it would be problematic is deeply ironic to me.

    Also, again, I don’t really see the problem with changing the date in the middle of the day. It’s virtually the same as changing it at 00:00 or 04:00: you change the date once every 24 hours. Right now you have a situation where one person’s 3rd of the month could be another person’s 2nd or 4th, depending on where on the globe they are. That’s not really ideal either, especially for that call-scheduling example by the GP.
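
    As a quick illustration of that last point, a Python sketch (the zones and the moment are arbitrarily chosen examples):

    ```python
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    instant = datetime(2024, 7, 3, 10, 30, tzinfo=timezone.utc)  # one single moment

    for tz in ("Pacific/Kiritimati", "UTC", "Pacific/Pago_Pago"):
        print(tz, instant.astimezone(ZoneInfo(tz)).date())
    # The same moment falls on the 4th, the 3rd and the 2nd of July respectively.
    ```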


  • Cool, so sunrise is at 8 PM now.

    And the problem with that is… ?

    Or maybe there’s just no consistent relationship between what a clock on the East and West coast of America say, and a call can’t be scheduled between them.

    If you get rid of timezones they all say the same time, no? If you want to schedule a call you just say the time and save the timezone offset fiddling.

    The real problem with time and date is that it has to fit social and natural systems as well as actual passage of time.

    Can you give any more concrete examples? None come to mind beyond habit, which is not an immutable thing.



  • If you rise anywhere above level 5 or so, the difficulty ratchets up so much it makes the main quest nearly impossible to complete.

    Didn’t Oblivion already have the difficulty slider? You could just adjust that, no?

    I know level scaling is a big topic in the industry, but for me, the way it’s implemented nearly ruins what is otherwise a mostly great game.

    Two of the first RPGs I played were Gothic and Gothic II, which released roughly in the same era as Morrowind and Oblivion, and they had no dynamic level scaling at all, so I don’t really see the appeal either. A tiny Mole Rat being roughly the same challenge as a big bad Orc just breaks immersion. If you were to meet the latter early in the game it would just curb-stomp you, which provided an immersive way of gating content and a real sense of achievement when you came back later with better armour and weapons to finally defeat the enemy who gave you so many problems earlier. Basically the same experience you had with Deathclaws in Fallout: New Vegas compared to Fallout 3 - they aren’t just a set piece, they are a real challenge.

    The games had their own problems - the fighting system sucked, for example, and I’m told the English translation was so bad the games just flopped in the Anglosphere, putting them squarely in the Eurojank category. But creating a real sense of progression and an immersive world were certainly not amongst their weaknesses.


  • a neural network with a series of layers (W in this case would be a single layer)

    I understood this differently. W is a whole model, not a single layer of one: it is a layer of the Transformer architecture, not of a model. So it is a single feed-forward or attention model, which forms one layer of the Transformer. As the paper says, a LoRA:

    injects trainable rank decomposition matrices into each layer of the Transformer architecture

    It basically learns to shift the output of each Transformer layer. But the original Transformer stays intact, which is the whole point, as it lets you quickly train a LoRA when you need this extra bias, and you can switch to another for a different task easily, without re-training your Transformer. So if the source of the bias you want to get rid of is already in these original models in the Transformer, you are just fighting fire with fire.
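
    Per layer, my understanding boils down to something like this minimal PyTorch-style sketch (not the reference implementation; the class and variable names are made up):

    ```python
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """A frozen pre-trained weight W plus a trainable low-rank shift B @ A."""

        def __init__(self, linear: nn.Linear, rank: int = 8):
            super().__init__()
            self.base = linear
            for p in self.base.parameters():
                p.requires_grad_(False)          # W stays intact
            d_out, d_in = linear.weight.shape
            self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # trainable
            self.B = nn.Parameter(torch.zeros(d_out, rank))        # trainable, starts at zero

        def forward(self, x):
            # Original layer output, shifted by the learned low-rank term.
            return self.base(x) + x @ self.A.T @ self.B.T
    ```

    Swapping tasks then just means swapping which A and B you load; W itself never changes.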

    Which is a good approach for specific situations, but not for general ones. In the context of the OP you would need one LoRA to fight its sexualising of Asian women, then another one for the next bias you find, and before you know it you have hundreds and your output quality has degraded irrecoverably.


  • Yeah but that’s my point, right?

    That

    1. you do not “replace data until your desired objective”.
    2. the original model stays intact (the W in the picture you embedded).

    Meaning that when you change or remove the LoRA (A and B), the same types of biases will just resurface from the original model (W). Hence a “less biased” W being the preferable solution, where possible.

    Don’t get me wrong, LoRAs seem quite interesting, they just don’t seem like a good general approach to fighting model bias.


  • First, there is no such thing as a “de-biased” training set, only sets with whatever target series of biases you define for them to reflect.

    Yes, I obviously meant “de-biased” by the definition of whoever makes the set. I didn’t think it worth mentioning, as it seems self-evident. But again, in concrete terms regarding the OP this just means not having your dataset skewed towards sexualised depictions of certain groups.

    1. either you replace data until your desired objective, which will reduce the model’s quality for any of the alternatives

    […]
    For reference, LoRAs are a sledgehammer approach to apply the first way.

    The paper introducing LoRA seems to disagree (emphasis mine):

    We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks.

    There is no data replaced, and the model is not changed at all. In fact, if I’m not misunderstanding it, it adds an additional neural network on top of the pre-trained one, i.e. it’s adding data instead of replacing any. Fighting bias with bias, if you will.
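
    The “greatly reducing the number of trainable parameters” part is also easy to see with back-of-the-envelope arithmetic (the dimensions below are purely illustrative, not taken from the paper):

    ```python
    d = 4096  # hypothetical width of one square Transformer weight matrix W (d x d)
    r = 8     # hypothetical LoRA rank

    full_finetune = d * d      # updating W directly: 16,777,216 parameters
    lora_update = 2 * d * r    # training A (r x d) and B (d x r): 65,536 parameters

    print(full_finetune / lora_update)  # 256x fewer trainable parameters per layer
    ```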

    And I think this is relevant to a discussion of all models, as reproduction of training set biases is something common to all neural networks.