How we can prevent Nostr from ruining our day
Nostr has a problem with explicit content. Let's look at this from the perspective of a typical Nostr user.

Above we can see my "recent" Nostr feed on the Yakihonne web application. This shows notes only from other Nostr users whom I have affirmatively "followed". For this reason, the odds of me encountering an image that ruins my day are actually low, and I don't think I need a service to scan images for me.
Here, on the other hand...

This shows me navigating to the #asknostr hashtag. Looks fine, right?
Well, I can tell you: A few days ago I was looking at this very same hashtag, freshly awakened following my morning coffee, and I saw a bunch of posts which really ruined my day. Let's just say it wasn't "pornography" that I saw; it was much, much worse. Much worse.
So, what do we do about this?
Various solutions have been proposed: relying on community tagging, relying on "web of trust", or maybe even blacklisting certain image domains. But none of these will work at scale. When I go to the #asknostr hashtag, I really don't just want to see posts from my web of trust... I want to see all the posts! I want to see all the posts, except SPAM and images that will ruin my day.
The part where I quixotically attempt to fix this problem
After discussing a bit with FiatJaf, Hodlbod, and others in the community, this is currently the best solution I have come up with. It may be that someone has a better solution, if that's you -- please speak up.
On this page I will use the term "APPLICATION", which could be an intermediary service/API like the ones served by Yakihonne, Coracle, Primal, or Damus -- or it could actually be a relay, running strfry or another relay implementation.
We start with the "APPLICATION" (again: it could be the API of a web or mobile app, or it could be a relay...) getting some sense of WHICH events should be prioritized for image safety scanning.
One trick: If you know that your users will often be checking popular hashtags like #asknostr or whatever, you can queue up these events to extract image URLs, and do this PERIODICALLY, like every few minutes... so when your user sits down and looks at #asknostr, all the recent events have already been scanned for NSFW images.
In fact -- if you are showing your users lists of events for popular hashtags like #asknostr ... then you need to DE-SPAM those events anyway. I suggest you DE-SPAM them, leaving you with a list of events you MIGHT want to show, and then... extract image URLs and test those image URLs for NSFW.
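Here's a minimal sketch of that loop in TypeScript. Only the endpoint itself is real (it's documented below); the naive URL-extraction regex and the NostrEvent shape are my assumptions, and how you fetch and de-spam events is entirely up to you.

type NostrEvent = { content: string };

const API = "https://nostr-media-alert.com/score";

// Naive extraction: grab anything in the note content that looks like a
// media URL. Real clients may prefer imeta tags or a proper URL parser.
const MEDIA_RE = /https?:\/\/\S+\.(?:png|jpe?g|gif|webp|mp4|webm)/gi;

async function prescan(apiKey: string, events: NostrEvent[]): Promise<void> {
  const urls = events.flatMap((e) => e.content.match(MEDIA_RE) ?? []);
  for (const url of urls) {
    // Each request warms the service's cache, so when a user actually
    // opens the hashtag feed, scores come back fast.
    await fetch(`${API}?key=${apiKey}&url=${encodeURIComponent(url)}`);
  }
}

You'd run prescan every few minutes for each popular hashtag, feeding it whatever de-spammed event list you already have.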
But, most basically: you extract the image URLs from events, any way you choose, and get "scores" for images and videos. The "score" is just a floating point number between 0 (zero) -- which signifies, "this is safe to show to Grandma" -- and 1 (one), which signifies, "please don't show this to Grandma."
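In code, getting a score is a single GET request. A hedged sketch (the endpoint and response shape are documented below; treating anything other than SUCCESS as null is just one reasonable choice):

// Returns a score between 0 ("safe to show to Grandma") and 1 ("please
// don't show this to Grandma"), or null if the service couldn't score it.
async function scoreUrl(apiKey: string, url: string): Promise<number | null> {
  const res = await fetch(
    `https://nostr-media-alert.com/score?key=${apiKey}&url=${encodeURIComponent(url)}`
  );
  const body = (await res.json()) as { message: string; score: number | null };
  return body.message === "SUCCESS" ? body.score : null;
}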
As it turns out, I'm not "all talk", and, despite adhering to a rigorous schedule of recreational drug use and winter sports, have now thrown together a pretty high-performance API which does exactly this.
API Docs
Making Requests
We start with a URL that you can use in your browser, like this:
https://nostr-media-alert.com/score?key=demo-api-key-heavily-rate-limited&url=https://rizful-public.s3.us-east-1.amazonaws.com/temp/known-safe-image.jpg
Let's dig in...
Base URL: https://nostr-media-alert.com/score
Query Parameter 1: key
— Your API key. We'll provide one.
Query Parameter 2: url
— The URL of the image or video you want to check.
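If you'd rather make the demo request from code than from the browser bar, a quick sketch (the demo key above is heavily rate limited, so this is just for kicking the tires):

// Node 18+ ESM or any modern browser module (uses top-level await).
const demoUrl =
  "https://nostr-media-alert.com/score" +
  "?key=demo-api-key-heavily-rate-limited" +
  "&url=" +
  encodeURIComponent(
    "https://rizful-public.s3.us-east-1.amazonaws.com/temp/known-safe-image.jpg"
  );

const res = await fetch(demoUrl);
console.log(await res.json()); // see the response formats below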
Responses
We'll provide you with (hopefully) useful info about the URLs you submit. You should get a response to your HTTP request in under 5 seconds (for images) and under 25 seconds (for videos). In many cases, if we've seen the URL in the past, you should get a response within about 250ms (1/4 of a second).
Here's an example response:
{
  "message": "SUCCESS",
  "score": 0.999
}
This response indicates that our machine learning models successfully processed your image or video URL. A score above 0.97, or so, suggests that you might not want to show this to Grandma.
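As a sketch of how a client might act on that (the 0.97 cutoff follows the suggestion above; whether you blur, hide, or warn on a flagged image is entirely your call):

type ScoreResponse = { message: string; score: number | null };

const GRANDMA_THRESHOLD = 0.97; // per the suggestion above

// Only hide on a confident high score. How you treat unscoreable media
// (INVALID MEDIA, TIMEOUT, etc.) is your own risk-tolerance decision.
function shouldHide(resp: ScoreResponse): boolean {
  return resp.message === "SUCCESS" && (resp.score ?? 0) > GRANDMA_THRESHOLD;
}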
{
  "message": "INVALID MEDIA",
  "score": null
}
This means we think the link you submitted is not a valid image or video URL. We do some magic to actually inspect media files and make sure they're valid. If you submit a link that you're really sure is a valid media file, but you get this message, please let us know.
{
  "message": "TIMEOUT",
  "score": null
}
This means we tried to process your image or video URL, but it took too long, like longer than about 25 seconds. If this is a VIDEO you've sent, sometimes this is normal. You can just check again later... like wait another 20 seconds and try again with the same URL. If this is an IMAGE you sent, you should not see TIMEOUT -- if you see that, please contact us so we can fix it.
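For videos, that retry advice might look like this. The ~20-second wait follows the suggestion above; capping at three attempts is an arbitrary choice of mine, not part of the API contract.

async function scoreWithRetry(
  apiKey: string,
  url: string,
  maxTries = 3
): Promise<number | null> {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const res = await fetch(
      `https://nostr-media-alert.com/score?key=${apiKey}&url=${encodeURIComponent(url)}`
    );
    const body = (await res.json()) as { message: string; score: number | null };
    // TIMEOUT on a video can be normal; anything else is a final answer.
    if (body.message !== "TIMEOUT") return body.score;
    // Wait ~20 seconds before asking again with the same URL.
    await new Promise((r) => setTimeout(r, 20_000));
  }
  return null;
}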
{
  "message": "RATE LIMITED",
  "score": null
}
Your API key is good for a certain number of simultaneous requests. If you see "RATE LIMITED", then you're a big boy, and you're making too many simultaneous requests. To upgrade to more simultaneous requests, please call 1-800-GOOGLE, or, alternatively, think positive thoughts.
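Positive thoughts aside, the practical fix is to throttle yourself client-side. Here's a tiny semaphore sketch; MAX_CONCURRENT is an assumption -- set it to whatever your key actually allows.

const MAX_CONCURRENT = 4; // assumption: match your key's actual limit

let active = 0;
const waiters: Array<() => void> = [];

// Run fn() only when a slot is free; queued callers wait their turn
// instead of triggering RATE LIMITED.
async function withSlot<T>(fn: () => Promise<T>): Promise<T> {
  while (active >= MAX_CONCURRENT) {
    await new Promise<void>((resolve) => waiters.push(resolve));
  }
  active++;
  try {
    return await fn();
  } finally {
    active--;
    waiters.shift()?.(); // wake the next queued caller, if any
  }
}

// Usage: withSlot(() => fetch(...))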