What is almost certainly happening with Tumblr’s image classifier is that they’ve deployed the bot entirely untrained, and are relying on user complaints to teach it what NSFW content on tumblr looks like.
Making people participate in their own censorship this way, and punishing those who don’t participate with the threat of being silenced even when they’re in compliance with the rules, is, IMHO, abuse of their userbase.
Source: I’ve worked for three different top-five social networks, and at all of them this kind of labeling work would have been offloaded onto the userbase as much as possible.
Please signal boost. I don’t care if you reblog or repost, just get the word out so that people know what they’re doing as they interact with this thing.
My twitter is @soren_tycho (nsfw, sfw account coming soon) and my github username is sorentycho.
Just chiming in as a machine learning expert: this is almost certainly the case (and, knowing tumblr, they weren’t even smart enough to start from pre-trained weights).
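To make the claim concrete, here’s a minimal sketch of what “deployed untrained, learning from user reports” could look like. This is purely illustrative: the class, features, learning rate, and report stream are all made up, and nothing here reflects Tumblr’s actual code. It’s just a plain online logistic-regression learner where user flags supply the training labels:

```python
import math
import random

class OnlineNSFWClassifier:
    """Hypothetical sketch: a classifier shipped with random (untrained)
    weights, updated one step at a time as user reports come in."""

    def __init__(self, n_features, lr=0.1, seed=0):
        rng = random.Random(seed)
        # "Untrained": weights start as small random noise, not pre-trained.
        self.w = [rng.uniform(-0.01, 0.01) for _ in range(n_features)]
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        # Probability the model thinks this image is NSFW.
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, reported):
        # One SGD step of logistic regression. The "label" is just whether
        # a user flagged the post (1) or an appeal cleared it (0) -- i.e.
        # the userbase is doing the labeling work for free.
        err = self.predict_proba(x) - float(reported)
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err
```

The point of the sketch: before any reports arrive, the model’s outputs are essentially noise, so early flags (correct or not) are what shape every future decision. That’s why interacting with the bot, even to appeal, feeds it training signal.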