Do you have any advice, recommendations, and/or tips for someone wanting to host a public/non-personal Lemmy instance?
from Kalcifer@sh.itjust.works to selfhosted@lemmy.world on 10 Nov 2024 23:47
https://sh.itjust.works/post/27913099

If you think this post would be better suited in a different community, please let me know.


Topics could include (this list is not intended to be exhaustive — if you think something is relevant, then please don’t hesitate to share it):


Cross-posts

1. sh.itjust.works/post/27913098

#selfhosted


just_another_person@lemmy.world on 11 Nov 2024 00:01 next collapse

It could be costly in a few places, so choose your host wisely:

  • data ingress/egress
  • storage (block and DB)
  • load balancing (if you choose to go that route)

I know that R2 has no charge for ingress/egress.

The block and DB costs are technically unbounded, and will never decrease by default.
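For a sense of scale, it’s easy to watch both growth curves yourself. Here is a minimal monitoring sketch in Python, assuming a standard Lemmy Postgres database and a local pictrs volume (the credentials and paths are placeholders to adjust for your deployment):

```python
import shutil
import psycopg2  # pip install psycopg2-binary

# Connection details are assumptions; match them to your own deployment.
conn = psycopg2.connect(dbname="lemmy", user="lemmy", host="localhost")
with conn, conn.cursor() as cur:
    # pg_database_size() reports the database's total on-disk size.
    cur.execute("SELECT pg_size_pretty(pg_database_size('lemmy'))")
    print("database size:", cur.fetchone()[0])

# Usage of the filesystem holding the pictrs media volume (path is an assumption).
usage = shutil.disk_usage("/var/lib/pictrs")
print(f"media filesystem: {usage.used / 1e9:.1f} GB used of {usage.total / 1e9:.1f} GB")
```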

Kalcifer@sh.itjust.works on 11 Nov 2024 00:04 next collapse

choose your host wisely

Do you have any particular recommendations for a host?

just_another_person@lemmy.world on 11 Nov 2024 01:31 next collapse

I can tell you it will be the most expensive on AWS or Azure. That’s about all I could say without pricing stuff out.

sugar_in_your_tea@sh.itjust.works on 12 Nov 2024 02:41 collapse

I’d look into Hetzner, their pricing is pretty fair and they have some nifty features.

Also check out Vultr, they have block storage and some interesting addons.

That’s where I’d start, but I haven’t needed to host anything like Lemmy.

Kalcifer@sh.itjust.works on 11 Nov 2024 00:06 collapse

  • data ingress/egress
  • storage (block and DB)

Do you have any estimates of the relationship between user count and average data-transfer rates, and of the average rate of storage growth?

just_another_person@lemmy.world on 11 Nov 2024 01:03 collapse

It would depend completely on how many users you let in and what kind of things they’ll be doing. Some users are super heavy with uploading images, some users aren’t.

I haven’t read the docs in a long time, but perhaps you could restrict image uploading or something. Nothing you can do about unbounded DB growth without expiring content though.

walden@sub.wetshaving.social on 11 Nov 2024 00:20 next collapse

We require applications, and most applications we get are extremely low effort and we don’t approve them. If you have open registrations you’ll be doing a lot of moderation for spam.

Run the software that scans images for CSAM. It’s not perfect but it’s something. If your instance freely hosts whatever without any oversight, word will spread and all of a sudden you’re hosting all sorts of bad stuff. It’s not technically illegal if you don’t know about it, but I personally don’t want anything to do with that.

Kalcifer@sh.itjust.works on 11 Nov 2024 00:27 next collapse

We require applications

Is this functionality built into the Lemmy software?


Addendum (2024-11-11T00:32Z):

Ah, yeah, it looks like it is configurable in the admin panel [1].

References

1. Lemmy Documentation. join-lemmy.org. Accessed: 2024-11-11T00:35Z. join-lemmy.org/docs/…/01-getting-started.html#reg…. - “2. Getting Started”. §“Registration”. > Question/Answer: Instance admins can set an arbitrary question which needs to be answered in order to create an account. This is often used to prevent spam bots from signing up. After submitting the form, you will need to wait for some time until the answer is approved manually before you can login.

[deleted] on 11 Nov 2024 00:28 next collapse

.

walden@sub.wetshaving.social on 11 Nov 2024 00:36 collapse

Yeah, it’s just something like “Tell us why you want to join this instance”. If the answer is “to promote my content” or “qq”, for example, they don’t get approved.

It’s done by the Lemmy software.

Kalcifer@sh.itjust.works on 11 Nov 2024 00:28 next collapse

If you have open registrations you’ll be doing a lot of moderation for spam.

Perhaps Captchas are sufficient?

walden@sub.wetshaving.social on 11 Nov 2024 03:02 next collapse

I just checked and we have that turned on, too.

We don’t get a lot of applications. A couple per week, maybe.

Dave@lemmy.nz on 11 Nov 2024 04:22 collapse

The spam is not from bots, it’s people being paid to spam. Captchas absolutely need to be turned on or else you get bots as well, but they don’t stop the spam.

Kalcifer@sh.itjust.works on 11 Nov 2024 05:40 collapse

The spam is not from bots, it’s people being paid to spam.

Do you know any specific/official organizations that do this, and/or examples where it’s occurred on Lemmy?

Dave@lemmy.nz on 11 Nov 2024 06:03 collapse

It’s pretty random outside the Russian misinformation sites (which I haven’t seen in a while, but they probably got better at hiding).

It’s hard to give you a link because mods or admins remove the posts or ban the accounts pretty quickly most of the time. But there is a new spam account at least every day (I can think of at least two today. Edit: 4). They come in waves, so sometimes there are a whole bunch.

That’s probably another thing you need to know. I’m on Lemmy.nz; you’re on sh.itjust.works. If some new spam account signs up on Lemmy.world and posts to lemm.ee, and it’s then removed by an admin on your instance, it is only removed for people on your instance. Everyone else still sees it, as your instance hosts neither the community nor the user, so it can’t federate out anything to deal with it. The lemm.ee instance could remove the spam post or comment in a way that federates out to other instances, but it can’t ban the user except on its own instance. Only the Lemmy.world instance can ban the user in a way that federates out to other instances. This is something you’ll get a better understanding of over time.

Lemmy.world has a lot of help, so they don’t have issues, but often the spam will come from obscure instances while the admin is asleep and there is no backup, so every other instance has to remove the spam for their own instance. Then you have to work out how to mitigate that for your own instance while you are asleep. Most admins are pretty understanding that this is a hobby and don’t expect everyone to be immediately available, but if you have open registrations then you are likely to be targeted more and will need a better plan.

Kalcifer@sh.itjust.works on 11 Nov 2024 07:18 collapse

If some new spam account signs up on Lemmy.world and posts to lemm.ee, and it’s then removed by an admin on your instance, it is only removed for people on your instance. Everyone else still sees it, as your instance hosts neither the community nor the user, so it can’t federate out anything to deal with it. The lemm.ee instance could remove the spam post or comment in a way that federates out to other instances, but it can’t ban the user except on its own instance. Only the Lemmy.world instance can ban the user in a way that federates out to other instances.

This makes me think that we should maintain a community-curated blocklist in, for example, a Git repository. It could be a list of usernames, and/or a list of instances that are known to be spam, that gets updated as new accounts and instances are discovered. Then any instance owner can simply pull the most current version of the blocklist (this could even be done automatically). Once the originating instance blocks the malicious account, they can be removed from the list. This also gives those who have been blocked a centralized method to appeal the block (e.g., open an issue to create an appeal).
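To make the pull side of this concrete, here is a minimal sketch, assuming a hypothetical blocklist of `user@instance` entries hosted somewhere like a Git repository; the ban/unban calls are left as print placeholders, since a real script would go through Lemmy’s admin API:

```python
import requests  # pip install requests

# Hypothetical community-maintained list: one "user@instance" per line.
BLOCKLIST_URL = "https://example.com/spam-blocklist/raw/main/accounts.txt"
STATE_FILE = "applied_bans.txt"  # what this instance has already applied

def fetch_blocklist() -> set[str]:
    resp = requests.get(BLOCKLIST_URL, timeout=10)
    resp.raise_for_status()
    return {line.strip() for line in resp.text.splitlines() if line.strip()}

def load_applied() -> set[str]:
    try:
        with open(STATE_FILE) as f:
            return {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        return set()

current, applied = fetch_blocklist(), load_applied()

for account in sorted(current - applied):
    # Placeholder: a real script would call the instance's admin API here.
    print("would ban:", account)

for account in sorted(applied - current):
    # Dropped from the upstream list (appeal accepted, or the origin
    # instance banned them), so the local ban gets lifted automatically.
    print("would unban:", account)

with open(STATE_FILE, "w") as f:
    f.write("\n".join(sorted(current)) + "\n")
```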

I would honestly have expected something like this to already exist. I think it’s partly the purpose of Fediseer, but I’m not completely sure.

Dave@lemmy.nz on 11 Nov 2024 07:48 collapse

This makes me think that we should maintain a community-curated blocklist in, for example, a Git repository.

There would be a few problems I can think of with this approach. The first one is who controls it? Whoever that is, you haven’t solved the issue, because instead of only the instance hosting the user being able to federate the ban, now only the maintainer of the Git repo can update the ban list.

If you have many people able to update the repo, then the issue becomes a question of how do you trust all these people to never, ever, ever get it wrong? If you ban a user and opt to remove all their content (which you should, with spam), and you are automating this, then if anyone screws up, how do you get someone’s account unbanned on all those instances? How do you get all their content restored? That is a separate thing, and Lemmy currently provides no good way to do it. How do you ensure there are no malicious people with control of the repo, but also have enough instances involved to make it worthwhile?

There is a chat room where instance admins share details of spam accounts, and it’s about the best we have for Lemmy at the moment (it works quite well, really, because everyone can be instantly notified but also make their own decisions about who to ban or if something is spam or allowed on their instance - because it’s pretty common that things are not black and white).

I would honestly have expected something like this to already exist. I think it’s partly the purpose of Fediseer, but I’m not completely sure.

Fediseer has a similar purpose but it’s a little different. So far we have been talking about spam accounts set up on various instances, and the time it takes for those mods and admins to remove the spam. But what happens if instead of someone setting up a spam account on an existing instance, they instead create their own instance purely for spamming other instances?

Fediseer provides a web of trust. An instance receives a guarantee from another instance. That instance then guarantees another instance. It creates a web of trust starting from some known good instances. Then if you wish you can choose to have your lemmy instance only federate with instances that have been guaranteed by another instance. Spam instances can’t guarantee each other, because they need an instance that is already part of the web to guarantee them, and instances won’t do that because they risk their own place in the web if they falsely guarantee another instance (say, if one instance keeps guaranteeing new instances that turn out to be spam, they will quickly lose their own guarantee).

Fediseer actually goes further than this, allowing instances to endorse or censure other instances, and you can set up your instance to only federate with instances that haven’t been censured, or to defederate from instances that others have censured for specific reasons (e.g., “hate speech”, “racism”, etc.).
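Fediseer exposes this web of trust over a public HTTP API, so an instance can sync against it on a schedule. A rough sketch of pulling the guaranteed-instance list (the endpoint and parameter names are my recollection of Fediseer’s API, so treat them as assumptions and verify against its docs):

```python
import requests  # pip install requests

FEDISEER = "https://fediseer.com/api/v1"

# Endpoint and parameter names are assumptions from memory of Fediseer's
# public API; check the current API documentation before relying on this.
resp = requests.get(
    f"{FEDISEER}/whitelist",
    params={"guarantors": 1, "domains": "true"},
    timeout=10,
)
resp.raise_for_status()
data = resp.json()
guaranteed = data.get("domains", data)  # tolerate either response shape
print(f"{len(guaranteed)} instances currently hold a guarantee")
```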

It’s quite a cool tool but doesn’t help the original discussion issue of spam accounts being set up on legitimate instances.

Kalcifer@sh.itjust.works on 12 Nov 2024 01:01 next collapse

The first one is who controls it?

Ideally, nobody. Anyone could make their own blocklist, and one could choose to pull from any of them.

Dave@lemmy.nz on 12 Nov 2024 01:58 collapse

I would like functionality similar to this. One problem with a big list is that different instances have different ideas over what is acceptable. I’d love to “subscribe” to, say, Lemmy.world’s bans and then anyone they ban would get banned on my instance as well. Of course this makes a bigger mess to clean up when someone gets banned by mistake.

Kalcifer@sh.itjust.works on 12 Nov 2024 03:06 collapse

One problem with a big list is that different instances have different ideas over what is acceptable.

Yeah, that would be where being able to choose from any number of lists, or to freely create one comes in handy.

Kalcifer@sh.itjust.works on 12 Nov 2024 01:04 next collapse

how do you trust all these people to never, ever, ever get it wrong?

The naively simple idea was that the banned user could open an appeal to get their name removed from the blocklist. Also, keep in mind that the community’s trust in the blocklist is predicated on the blocklist being accurate.

Kalcifer@sh.itjust.works on 12 Nov 2024 01:07 next collapse

If you ban a user […], and you are automating this, then if anyone screws up, how do you get someone’s account unbanned on all those instances?

The idea would be that if a user is automatically banned, then removing them from the list would cause them to be automatically unbanned. That being said, you did also state:

If you ban a user and opt to remove all their content (which you should, with spam)

How do you get all their content restored

To which I say that I hadn’t considered that the content would be deleted 😜. I was assuming that the user would only be blocked, but their content would still be physically on the server — it would just be effectively invisible.

Dave@lemmy.nz on 12 Nov 2024 01:56 collapse

Technically it is still there. However, when a user is banned, you can also choose to remove their content. You could choose not to, but then what’s the point in automatically banning a spam account if you have to go and remove the spam posts yourself?

If you choose to remove it all, and you accidentally hit a real user, you’ll remove all their posts and comments. Lemmy doesn’t provide an easy way to restore the content. And although there are automated solutions, you come to the next problem of knowing which posts to restore. Many posts were removed by mods of communities; many were removed by the users themselves. You don’t want to restore those items; instead, you need to remember which ones you removed and restore only those - this is different functionality from Lemmy’s option to remove all their content.

This actually exists in some form: there is an AutoMod that keeps a log of removed content for banned users and allows a restore of that content. So it’s a solved problem; it would just need a similar solution to be built for a ban list.
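The key piece of bookkeeping is remembering which removals were yours, so a restore never touches mod- or self-removed content. A toy sketch of that log, with a hypothetical file format and helper names (not the AutoMod’s actual code):

```python
import json
from pathlib import Path

# Hypothetical log file; the AutoMod mentioned above keeps something similar.
LOG = Path("autoban_removals.json")

def record_removals(account: str, post_ids: list[int]) -> None:
    """Remember which posts *this automation* removed when banning `account`."""
    log = json.loads(LOG.read_text()) if LOG.exists() else {}
    log.setdefault(account, []).extend(post_ids)
    LOG.write_text(json.dumps(log, indent=2))

def restorable(account: str) -> list[int]:
    """On a successful appeal, restore only our own removals, never the
    posts that community mods or the user themselves removed."""
    log = json.loads(LOG.read_text()) if LOG.exists() else {}
    return log.get(account, [])
```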

One thing you’ll learn quickly is that Lemmy is version 0 for a reason.

Kalcifer@sh.itjust.works on 12 Nov 2024 03:09 collapse

One thing you’ll learn quickly is that Lemmy is version 0 for a reason.

Fair warning 😆

Kalcifer@sh.itjust.works on 12 Nov 2024 01:10 next collapse

There is a chat room where instance admins share details of spam accounts, and it’s about the best we have for Lemmy at the moment (it works quite well, really, because everyone can be instantly notified but also make their own decisions about who to ban or if something is spam or allowed on their instance - because it’s pretty common that things are not black and white).

Yeah, I think I’m more on the side of this now. The chat is a decent and workable solution. It’s definitely a lot more hands-on/manual, but I think it’s a solid middle-ground solution for the time being.

Kalcifer@sh.itjust.works on 12 Nov 2024 01:11 collapse

Fediseer provides a web of trust. An instance receives a guarantee from another instance. That instance then guarantees another instance. It creates a web of trust starting from some known good instances. Then if you wish you can choose to have your lemmy instance only federate with instances that have been guaranteed by another instance. Spam instances can’t guarantee each other, because they need an instance that is already part of the web to guarantee them, and instances won’t do that because they risk their own place in the web if they falsely guarantee another instance (say, if one instance keeps guaranteeing new instances that turn out to be spam, they will quickly lose their own guarantee).

How would one get a new instance approved by Fediseer?

Dave@lemmy.nz on 12 Nov 2024 02:04 collapse

First, don’t stress over it. Most instances are not strict about only federating with guaranteed instances. Most do not auto-sync with Fediseer at all, and the ones that do are more likely to only be syncing censures (when other instances are reporting the instance as problematic).

To get guaranteed on Fediseer, you need another instance to guarantee you. If you start your instance, hang out in the spam defense chat, and are generally sensible with your instance, then you’ll find someone willing to do it, no problem. Guarantees are not a huge risk to an instance, since they can also be revoked at any time. If someone guarantees you and then you start being a dick, they can just remove your guarantee. So it’s not a big decision; people will be happy to guarantee someone who seems reasonable.

Kalcifer@sh.itjust.works on 11 Nov 2024 00:29 next collapse

Run the software that scans images for CSAM.

Which software is that?

walden@sub.wetshaving.social on 11 Nov 2024 00:37 collapse

It’s called Lemmy-Safety or Fedi-Safety, depending on where you look.

One thing to note, I wasn’t able to get it running on a VPS because it requires some sort of GPU.

Kalcifer@sh.itjust.works on 11 Nov 2024 00:46 next collapse

One thing to note, I wasn’t able to get it running on a VPS because it requires some sort of GPU.

This is good to know. I know that you can get a VPS with a GPU, but they’re usually rather pricey. I wonder if there’s one where the GPUs are shared, and you only get billed by how much the GPU is used. So if there is an image upload, the GPU would kick on to check it, you’d get billed for that GPU time, and then it would turn off and wait for the next image upload.

pe1uca@lemmy.pe1uca.dev on 11 Nov 2024 02:35 next collapse

I don’t think there are services like that, since usually this means deploying and destroying an instance, which takes a few minutes (if you just turn off the instance you still get billed). Probably the best option would be to have a snapshot, which costs way less than the actual instance, and create from it each day or so to run on the images uploaded since it was last destroyed.

This is kind of what I do with my media collection, I process it on my main machine with a GPU, and then just serve it from a low-power one with Jellyfin.

Kalcifer@sh.itjust.works on 12 Nov 2024 01:19 next collapse

Probably the best option would be to have a snapshot

Could you point me towards some documentation so that I can look into exactly what you mean by this? I’m not sure I understand the exact procedure that you are describing.

Kalcifer@sh.itjust.works on 12 Nov 2024 01:28 collapse

create from it each day or so to run on the images uploaded since it was last destroyed.

Unfortunately, for this use case, the GPU needs to be accessible in real time; there is a 10-second window after an image is posted for it to be processed [1].

References

1. “I just developed and deployed the first real-time protection for lemmy against CSAM!”. @db0@lemmy.dbzer0.com. !div0@lemmy.dbzer0.com. Divisions by zero. Published: 2023-09-20T08:38:09Z. Accessed: 2024-11-12T01:28Z. lemmy.dbzer0.com/post/4500908. - §“For lemmy admins:” > […] > > - fedi-safety must run on a system with GPU. The reason for this is that lemmy provides just a 10-seconds grace period for each upload before it times out the upload regardless of the results. [1] > > […]

db0@lemmy.dbzer0.com on 12 Nov 2024 07:41 collapse

You can actually run it in async mode without pictrs-safety and just have it scan your newly uploaded images directly from storage. It just doesn’t prevent uploads this way; it deletes them after the fact.
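Roughly speaking, the async mode amounts to a watcher over the media store instead of a hook in the upload path. A simplified illustration, not fedi-safety’s actual code (the storage path is an assumption and the classifier is stubbed out):

```python
import time
from pathlib import Path

# Path is an assumption; point it at wherever your pictrs stores originals.
MEDIA_DIR = Path("/var/lib/pictrs/files")
SCAN_INTERVAL = 30  # seconds between sweeps

def is_flagged(image: Path) -> bool:
    """Stub for the GPU classifier; fedi-safety runs a real model here."""
    return False

seen: set[Path] = set()

while True:
    for image in MEDIA_DIR.rglob("*"):
        if image.is_file() and image not in seen:
            seen.add(image)
            if is_flagged(image):
                # Async mode can't block the upload (it already happened),
                # so the remedy is deleting the file after the fact.
                image.unlink()
    time.sleep(SCAN_INTERVAL)
```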

Kalcifer@sh.itjust.works on 15 Nov 2024 03:32 collapse

You’re referring to using only fedi-safety instead of pictrs-safety, as was mentioned in §“For other fediverse software admins”, here, right?

db0@lemmy.dbzer0.com on 15 Nov 2024 10:32 collapse

yep

db0@lemmy.dbzer0.com on 11 Nov 2024 07:11 collapse

The software is set up in such a way that you can run it on your PC if you have a local GPU. It only needs like 2 GB of VRAM.

Kalcifer@sh.itjust.works on 11 Nov 2024 07:27 collapse

That is a cool feature, but that would mean that all of the web traffic would get routed to my local network (assuming that the server is set up on a remote VPS), which I really don’t want to have happen. There is also the added downtime potential caused by the added point of failure of the GPU being hosted in a much more volatile environment (i.e., not, for example, a tier 3 data center).

db0@lemmy.dbzer0.com on 11 Nov 2024 08:09 collapse

Not all web traffic, just the images to check. With any decent bandwidth, it shouldn’t be an issue for most. It’s also set up in such a way as to not cause downtime if the checker goes down.

Kalcifer@sh.itjust.works on 12 Nov 2024 00:09 next collapse

Not all web traffic, just the images to check.

Ah, yeah, my bad; this was a lack of clarity on my part. I meant all image traffic.

Kalcifer@sh.itjust.works on 12 Nov 2024 00:10 next collapse

With any decent bandwidth, it shouldn’t be an issue for most.

It’s not only the bandwidth; I just fundamentally don’t relish the idea of public traffic being directed to my local network.

db0@lemmy.dbzer0.com on 12 Nov 2024 07:33 collapse

You don’t get public traffic redirected. That’s not how it works.

Kalcifer@sh.itjust.works on 15 Nov 2024 04:25 collapse

Yeah, that was poor wording on my part — what I mean to say is that there would be unvetted data flowing into my local network and being processed on a local machine. It may be overparanoia, but that feels like a privacy risk.

db0@lemmy.dbzer0.com on 15 Nov 2024 10:34 collapse

I don’t see how it’s a privacy risk since you’re not exposing your IP or anything. Likewise the images are already uploaded to your servers, so there’s no extra privacy risk for the uploader.

Kalcifer@sh.itjust.works on 17 Nov 2024 02:47 collapse

“Security risk” is probably a better term. That being said, a security risk can also imply a privacy risk.

db0@lemmy.dbzer0.com on 17 Nov 2024 10:37 collapse

Why would it be a security risk?

Kalcifer@sh.itjust.works on 18 Nov 2024 06:20 collapse

For clarity, I’m not claiming that it would, with any degree of certainty, lead to incurred damage, but the ability to upload unvetted content carries some degree of risk. For there to be no risk, fedi-safety/pictrs-safety would have to be guaranteed to be absolutely 100% free of any possible exploit, as well as the underlying OS (and maybe even the underlying hardware), which seems like an impossible claim to make, but perhaps I’m missing something important.

db0@lemmy.dbzer0.com on 18 Nov 2024 06:46 collapse

You mean an exploit payload embedded in an image, pwning a system that parses that image through Python PIL? While there’s never a 100% chance of anything, you’re more likely to be struck by lightning than to see this come to pass, and at that point you’re at more security risk just using the internet at all.

Kalcifer@sh.itjust.works on 21 Nov 2024 23:53 collapse

I will preface by saying that I am not casting doubt on your claim; I’m simply curious: What is the rationale behind why it would be so unlikely for such an exploit to occur? What rationale causes you to be so confident?

db0@lemmy.dbzer0.com on 22 Nov 2024 10:07 collapse

Image processing libraries are used at the forefront of almost all web services, including Lemmy, and are extraordinarily robust. I really don’t have the time to go into this in depth, but if you are familiar with this stuff, you will know how extraordinary such an exploit would be; its existence would be causing massive chaos all over the world.

Kalcifer@sh.itjust.works on 12 Nov 2024 00:11 collapse

It’s also set up in such a way as to not cause downtime if the checker goes down.

Oh? Would the fallback be that it simply doesn’t do a check? Or perhaps it could disable image uploads if the checker is down? Something else? Presumably, this would be configurable.

db0@lemmy.dbzer0.com on 12 Nov 2024 07:32 collapse

It stops doing checks. IIRC you can configure it, yes.

db0@lemmy.dbzer0.com on 11 Nov 2024 12:14 collapse

github.com/db0/fedi-safety, and the companion app github.com/db0/pictrs-safety, which can be installed as part of your Lemmy deployment in the docker-compose (or with a var in your Ansible).

Kalcifer@sh.itjust.works on 11 Nov 2024 00:31 next collapse

If your instance freely hosts whatever without any oversight, word will spread and all of a sudden you’re hosting all sorts of bad stuff. It’s not technically illegal if you don’t know about it, but I personally don’t want anything to do with that.

Yeah, this is my primary concern. I’m hoping that there are established best practices for handling the majority of this sort of unwanted content.

catloaf@lemm.ee on 11 Nov 2024 01:41 next collapse

I would just turn off media uploads entirely. It’s not worth the risk or disk space.

Kalcifer@sh.itjust.works on 11 Nov 2024 02:00 collapse

I would just turn off media uploads entirely.

Do you mean also disabling thumbnails? IIUC, pict-rs handles all thumbnail generation [1]. The reason I point this out is that simply disabling image uploads won’t itself stop the generation of thumbnails [2]. There’s also the question of storing/caching images that come from federated servers.

References

1. Lemmy Documentation. Accessed: 2024-11-11T01:59Z. join-lemmy.org/docs/…/administration.html. - “9. Administration”. §“Lemmy Components”. §“Pict-rs”. > Pict-rs is a service which does image processing. It handles user-uploaded images as well as downloading thumbnails for external images.
2. “I just developed and deployed the first real-time protection for lemmy against CSAM!”. @db0@lemmy.dbzer0.com. Published: 2023-09-20T01:38:09-07:00. Accessed: 2024-11-11T02:16Z. lemmy.dbzer0.com/post/4500908. - ¶1 > […] if the content is a link to an external site, lemmy still caches the thumbnail and stores it in the local pict-rs […].

Dave@lemmy.nz on 11 Nov 2024 04:28 collapse

I will add that if you have open registrations you will be a target for spam and trolls, and if you don’t take quick action then some other instances are likely to defederate from your instance.

This depends on the instance: some will have a low tolerance and defederate pretty quickly, some instances will defederate temporarily until the spammers or trolls move to a different instance, and some won’t care. But you likely won’t know it’s happened unless you notice you aren’t getting content from that instance anymore.

One other thing is that if you’re going to run an instance and aren’t already on Matrix, make an account. It’s how instance admins tend to keep in contact with each other.

Kalcifer@sh.itjust.works on 12 Nov 2024 01:13 collapse

[…] if you’re going to run an instance and aren’t already on Matrix, make an account. It’s how instance admins tend to keep in contact with each other.

This is good advice.

finitebanjo@lemmy.world on 11 Nov 2024 04:40 collapse

How much server hosting experience do you have? I asked about database preferences over in Self-Hosting once and they basically all said “don’t choose a database ever. Run. Save yourself while there is still time!”

So maybe use a hosting service, I guess. Makes you a more difficult target for attacks but also involves your information getting out into the world in direct connection to your instance.

Kalcifer@sh.itjust.works on 11 Nov 2024 06:11 next collapse

How much server hosting experience do you have?

I’ve never hosted a public facing social media service. I have a few years experience hosting a number of my own personal services, but they aren’t at the scale of a public facing Lemmy instance.

finitebanjo@lemmy.world on 11 Nov 2024 18:16 collapse

You should be good as long as you know PostgreSQL

Kalcifer@sh.itjust.works on 11 Nov 2024 23:20 collapse

Aha, well, it depends on what you mean by “know”.

Kalcifer@sh.itjust.works on 11 Nov 2024 06:15 next collapse

I asked about database preferences over in Self-Hosting once and they basically all said “don’t choose a database ever.”

I’m not sure I follow what you mean; Lemmy uses PostgreSQL.

Kalcifer@sh.itjust.works on 11 Nov 2024 06:45 collapse

[Using a hosting service] makes you a more difficult target for attacks but also involves your information getting out into the world in direct connection to your instance.

I’m not sure I understand how one’s data would be leaked by the host.

finitebanjo@lemmy.world on 11 Nov 2024 18:14 collapse

Same way things get leaked by Equifax, Twitch, US Bank, etc. You’re most responsible with your information by not having unnecessary accounts or transactions.

Also, most hosts have WHOIS and ICANN registrations for domains, but you still need a domain regardless. And further than that, they might allow subpoenas from various companies who request the info.

Kalcifer@sh.itjust.works on 11 Nov 2024 23:29 next collapse

Same way things get leaked by Equifax, Twitch, US Bank, etc. You’re most responsible with your information by not having unnecessary accounts or transactions.

This would be low down on my concern for threat levels. At any rate, the only way to get around this would be to either host it on one’s own hardware on one’s own network, or to somehow anonymously purchase a VPS (I am currently unaware of a trustworthy VPS that allows anonymous hosting. I have heard of BitLaunch, but I don’t know how trustworthy it is — do they have the ability to intercept control of the DO Droplet?).

Addendum (2024-11-11T23:40Z):

I just found Njalla which seems to allow anonymous purchasing of VPSs, but idk how reputable they are.

Kalcifer@sh.itjust.works on 11 Nov 2024 23:31 next collapse

Also, most hosts have WhoIs and ICANN registrations for Domains, but you still need a domain regardless.

I’m not sure exactly what you are referring to. I don’t exactly follow how the VPS provider would have any privileged insight into one’s domain registration.

finitebanjo@lemmy.world on 11 Nov 2024 23:40 collapse

I’m saying if you paid for a service to host your instance remotely (the domain, the site pages, the database, everything), then everything on the domain would be tied to your person, and the service providers would have a certain power over your instance aside from just turning off your domain. There are more options to not list or to delist from the WHOIS registry for simple domain purchases.

I just have trust issues, you don’t need to mind my crazy ramblings.

Kalcifer@sh.itjust.works on 11 Nov 2024 23:46 next collapse

I’m saying if you paid for a service to host your instance remotely (the domain, the site pages, the database, everything), then everything on the domain would be tied to your person, and the service providers would have a certain power over your instance aside from just turning off your domain.

Ah, okay, I was under the assumption that the domain was purchased through a separate, independent provider, rather than through the same provider as that of the VPS.

Kalcifer@sh.itjust.works on 11 Nov 2024 23:47 collapse

I just have trust issues, you don’t need to mind my crazy ramblings.

Concerns about privacy and anonymity are perfectly valid. Ideally, I would want my involvement in a venture like this to be completely anonymous, but there are practical limitations (generally limited by how much added complexity/added risk one wants to put up with).

Kalcifer@sh.itjust.works on 11 Nov 2024 23:38 collapse

they might allow subpoenas from various companies who request the info.

“Allow” is an interesting choice of words. A subpoena is legally binding (depending on the jurisdiction). One could circumvent this by purchasing a domain anonymously, but I’m not currently aware of a reputable domain provider that allows anonymous purchasing of domains.

Addendum (2024-11-11T23:38Z):

I just found Njalla which seems to allow anonymous purchasing of domains, but idk how reputable they are.

finitebanjo@lemmy.world on 11 Nov 2024 23:44 collapse

It comes down to the individual company on whether or not to fight requests for user information. A lot of precedent exists for not complying.

Kalcifer@sh.itjust.works on 11 Nov 2024 23:49 next collapse

It comes down to the individual company on whether or not to fight requests for user information.

Wouldn’t this simply be obstruction of justice?

finitebanjo@lemmy.world on 12 Nov 2024 00:17 collapse

Not every court order is a criminal case.

Kalcifer@sh.itjust.works on 12 Nov 2024 00:49 collapse

Sure, but (in the USA) an investigation precedes a criminal case [2], and a court order is part of that. I directly cite, for example, 18 U.S. Code § 1509 - Obstruction of court orders [1]:

Whoever, by threats or force, willfully prevents, obstructs, impedes, or interferes with, or willfully attempts to prevent, obstruct, impede, or interfere with, the due exercise of rights or the performance of duties under any order, judgment, or decree of a court of the United States, shall be fined under this title or imprisoned not more than one year, or both.

References

1. “18 U.S. Code § 1509 - Obstruction of court orders”. Legal Information Institute. Cornell Law School. Accessed: 2024-11-12T00:42Z. www.law.cornell.edu/uscode/text/18/1509.
2. “A Brief Description of the Federal Criminal Justice Process”. FBI. Accessed: 2024-11-12T00:46Z. fbi.gov/…/a-brief-description-of-the-federal-crim…. - §“I. The Pretrial Stage”. §“Investigations, Grand Juries, and Arrests”. ¶1. > If a crime is brought to the attention of federal authorities, whether by a victim of the crime or a witness to it (e.g., a bank robbery), a federal law enforcement agency will undertake an investigation to determine whether a federal offense was committed and, if so, who committed it. […]

Kalcifer@sh.itjust.works on 11 Nov 2024 23:49 collapse

A lot of precedent exists for not complying.

Would you mind citing a case? I’m curious.

finitebanjo@lemmy.world on 12 Nov 2024 00:09 collapse

NY Times vs Njalla

Njalla does comply with some requests, and was forced to shut down some pirate bay instances at one point, though. Ghost is another privacy domain seller.

There’s also a term for companies called “bulletproof registrars”. For example, some Malaysian registrars apparently don’t have an address and cannot actually receive most subpoenas.

Mostly VPNs; I don’t know too much about similar cases with server hosts or domain sellers.

Kalcifer@sh.itjust.works on 12 Nov 2024 00:38 collapse

NY Times vs Njalla

Do you have an official record of them not complying with an official court-ordered subpoena? I looked into “NYT vs Njalla”, and it seems like it was the NYT making a private request to Njalla under threats of legal action, but no legal action followed [1][2].

References

1. “About those threats”. Blog. Njalla. Published: 2018-01-25. Accessed: 2024-11-12T00:33Z. njal.la/blog/about-those-threats/.
2. “Njalla gives New York Times The Pirate Bay treatment”. Staff Writer. MyBroadband. Published: 2018-01-26. Accessed: 2024-11-12T00:36Z. mybroadband.co.za/…/246265-njalla-gives-new-york-…. - ¶10 > TorrentFreak reported that Njalla did not hear back from the New York Times after sending the response.