#colocation

:boosts_ok_gay: Looking for a flatshare in Lyon

Sharing an ad from a friend who recently moved to Lyon:

Hi! I'm looking for a room in a flatshare that's safe for people with mental health conditions, queer-safe, etc., in Lyon or Villeurbanne. I'd like to share real moments with my flatmates, not just an apartment and the rent. Budget: €400-500.

[Edit: The person we'd found ended up backing out, so the ad is live again!]

Hello, Fediverse! :shibacool:

My flatmate is off to other adventures, so we're looking (somewhat urgently... 🚨) for a new person to join our #colocation in #Lyon! 🏠

The ad is up here: lcrt.fr/cry91l

If you have any questions at all, don't hesitate! :shibasmile:

A re-toot would be a hoot! 🦉

Continued thread

For many years, my business was also a happy customer of PriorityColo's colocation services in Toronto. If you need that kind of dedicated or colocated data-centre service, you will appreciate their professionalism. They get a strong thumbs-up from me.

prioritycolo.com/

2/3

Priority Colo Inc. (PriorityColo): "We offer cost-effective Colocation, Dedicated Servers, and related services utilizing our own highly redundant infrastructure, personnel, and procedures developed over our many years of experience."

Hello 🌻

A spot is opening up in a friend's flatshare toward the end of April.

Here's the link to the full ad: lcrt.fr/f0ydq0

"To sum up: there are two of us, trans and vegetarian, and we want to keep this flatshare collective, a place where we share more than just the rent 🙂
Joint lease, furnished, close to Saxe-Gambetta, ~€460 all charges included. See you soon ❤️"

Shares appreciated!

[Edit: Someone has been found (in principle), thanks for your re-toots! 😁]

Hello, Fediverse! :shibacool:

My flatmate is off to other adventures, so we're looking for a new person to join our #colocation in #Lyon! 🏠

The ad is up here: lcrt.fr/cry91l

If you have any questions at all, don't hesitate! :shibasmile:

A re-toot would be a hoot! 🦉

Hi everyone,
I hope you're all doing well!
Great news: I've found my internships! They will be in #brussels in #belgium!
I am now searching for a female #flatshare between March 29th and June 4th (approximately 2 months).
Reposting is helpful <3
Thanks to everyone btw
People from Brussels, see you soon <33

Continued thread

"Concepts of a plan..."

Starting to refine the short list for moving hosting of [ moose.ca ], [ fxide.io ], [ fxworks.io ], and others. I'm at the point where architectural considerations need to take centre stage.

Here's my current short list (in random order):

* lunanode.com
* xenyth.net
* servarica.com
* ca.ovh.com

If anyone has direct experience with these providers, I'd love to hear about it! Feel free to DM instead of mentioning me publicly, if so inclined.

If anyone knows of any other providers I should be considering, please speak up! My preference is for Canadian hosting companies; if they also have a presence overseas, even better!

OVH is not a Canadian company, but I'm keeping them in the running for now.

My use case is moving services off ionos.com and linode.com to other providers in Canada.

Also, I need to decide if I can be bothered running email again. There are Toronto-based email services available at reasonable prices. Needs more pondering.

Thx!

#homelab #soho #Linux

Friends...

#Colocation and/or #VPS #options in eastern #Ontario #Canada run by a #Canadian company?

Preferably centred on or near #Ottawa for the colo. For VPS I'm less concerned about location, as long as it's in the #Toronto / #Ottawa / #Montreal backbone areas.

Any options? Web searching seems to pull up either enterprise-grade ($$$) offerings, mickey-mouse stuff, or US companies with Toronto PoPs.

For a frame of reference, I currently have stuff on Linode/Akamai and Ionos.

Alternatively, if anyone is operating, or interested in starting, a colo cooperative in the #Almonte / #Renfrew / #Arnprior / #CarletonPlace / #Perth areas, please reach out. Maybe there is critical mass to build something, or I can help with something already operating...

#homelab #soho #Linux
Replied in thread

Next is allsimple.net.

Reasonably nice page, and somewhat informative. What really sticks out for me is that this is someone still doing #colocation (but it's in London, which is rather too many hundreds of miles away). I could really use colocation, in #Barvas maybe. 😉 I have a rack server ready to go, but its main use at the moment is drowning out the television or making airplane soundtracks.

...

Hi,
If you're looking for housing, a spot in a #colocation is available from January to June in #quebec, close to the university, in the Saint-Sacrement neighbourhood. A 20-year-old student and a cat already live there.
Perfect for a student coming to Québec for a single term.
Preferential rate for Mastodon folks, because on the whole they're nice :)
More info in private. Re-toots appreciated!

I'd like to share an update about my Dell R710.
At least some joy in this otherwise awful week.

Where were we? Ah, yes. The R710 was almost ready for colocation in my garage, and I still needed to install the two PCI 2xNVME adapters.

So, first, good news: The R710 finally found a home in my new colocation in Milan, Italy. At least one of us has something that can be called home.

The picture shows the final installation of the R710.
Unfortunately, the Dell rails I had did not fit the new rack, so I had to use a pair that was ... ahem ... borrowed from the colo provider. Apparently, the rack is somewhat longer than the usual ones; from what I saw in the other racks, I'm not the only one with that problem.

I installed the two PCI NVME adapters. However, I had second thoughts about L2ARC. From what I've read, and given my usage, I would not benefit from an L2ARC; it's probably better to have a well-tuned in-memory ARC. I also considered using either a write cache or a special (mirrored) device for ZFS metadata, but given that the spinning disks are mostly for archive/disaster recovery, I don't think I would benefit from those either.
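For reference, capping the ARC on FreeBSD is only a couple of knobs; a minimal sketch, assuming OpenZFS on FreeBSD 13+ and a purely illustrative 96GiB cap (the right value depends on your RAM and workload):

  # Cap the ARC at boot; 103079215104 bytes = 96GiB (placeholder value)
  echo 'vfs.zfs.arc_max="103079215104"' >> /boot/loader.conf

  # Or adjust at runtime, no reboot needed
  sysctl vfs.zfs.arc_max=103079215104

  # Check how big the ARC currently is
  sysctl kstat.zfs.misc.arcstats.size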

Instead, I installed two 1TB NVMEs on different cards, configured as a ZFS mirror, to hold virtual machines and jails. I believe that's plenty of space for what I need, but I left the two remaining slots free to expand that pool or for future use.
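In case it helps someone building something similar, that pool layout boils down to a few commands; a sketch, where the device names (nvd0/nvd1) and the pool name "fast" are assumptions of mine:

  # Mirror the two 1TB NVMEs into a pool for VMs and jails
  zpool create fast mirror nvd0 nvd1

  # One dataset per workload keeps snapshots and properties tidy
  zfs create fast/vms
  zfs create fast/jails

  # Verify both halves of the mirror are ONLINE
  zpool status fast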

I bought a Schuko PDU and a shelf to hold the router and the switch. The former is the only new equipment in the rack; everything else is something I previously had in my garage.

The router is an exact copy of the router I have in another location: 4x2.5G Ethernet ports, running plain FreeBSD, automated through Ansible, and terminating the VPNs. In the future, I would like to experiment with VXLANs and BGP... and maybe EVPN, once support lands in FreeBSD.
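The FreeBSD side of turning a box into a router is pleasantly small; a sketch of the basic knobs (the VPN termination and the VXLAN/BGP experiments are out of scope here):

  # Enable IPv4 and IPv6 forwarding persistently
  sysrc gateway_enable="YES"
  sysrc ipv6_gateway_enable="YES"
  service routing restart

  # Confirm forwarding is on (1 = enabled)
  sysctl net.inet.ip.forwarding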

It's not much, but it's a start. I need to start somewhere, right? I hope I've made my hardware fairy proud. 🧚‍♀️

So, what's next?

I have a twin R710 that I'm refurbishing. That one will mostly hold virtual machines for community projects, and it will have a different disk layout. I also plan to buy a decent switch: something rack-mountable and managed, with at least 16 ports at 2.5G+ speed (plus 2x10G). In the future, I also want to build a machine for a private LLM, as I have a specific use case for that, but that's definitely not happening before next year.

I'm bored, Tara, and I'm going to share a very fresh update about my Dell R710, even if perhaps nobody gives a sh*t about it. 👩‍💻

I left off with "Will it boot?".

It booted, but with 12 failed DIMMs. Luckily, I had a twin R710 from which I took replacement DIMMs, so it has 144GB of RAM again. (New spare DIMMs were ordered and arrived today.)

I tried to boot from *one* SSD. I decided not to trust the PERC H700 that I had configured in RAID0. As a matter of fact, I was right, and I remembered correctly: those RAID controllers put configuration data in the first few bytes of the disk, which corrupted the GPT table. (I managed to restore the table afterwards.)
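For anyone hitting the same problem: GPT keeps a backup copy of the table at the end of the disk, and on FreeBSD gpart can rebuild a clobbered primary from it. A sketch, with da0 as an assumed device name:

  # A damaged primary table typically shows the scheme as CORRUPT
  gpart show da0

  # Rebuild the damaged table from the intact backup copy
  gpart recover da0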

I impulsively purchased an LSI 9261-8i controller on eBay, hoping that the card would support passthrough in addition to RAID. I was wrong, and my "hardware fairy" 🧚‍♀️ was right once again. So I purchased a pre-modded 9211-8i in IT mode. It arrived this week, and today I tried it out. It worked like a charm with the boot SSDs, so I took courage and put all four 12TB HDDs in the trays.
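If you need to confirm a card really runs IT firmware once it's in the box, the vendor's sas2flash utility will say so; a quick sketch, with the dmesg check assuming FreeBSD since that's what this box runs:

  # 'Firmware Product ID' should report IT, not IR (RAID)
  sas2flash -listall

  # On FreeBSD, an IT-mode 9211-8i attaches via the mps(4) driver
  dmesg | grep -i mps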

The result is what you see in the picture. I was able to "lift and shift" the system from the HP N40L to the Dell R710. Of course, it runs FreeBSD.

The system is almost ready for colocation. I am planning to install 2 x (2 x M.2 NVME) PCI adapters, so I can have a redundant L2ARC cache and a dedicated fast storage pool for VMs and Jails.
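One note on that plan: ZFS never mirrors L2ARC devices, since cache contents are disposable and simply rebuilt from the pool, so "redundant" there really means two striped cache devices. Attaching them is a one-liner; a sketch with an assumed pool name "tank" and placeholder device names:

  # Add both NVMEs as (striped) L2ARC cache devices
  zpool add tank cache nvd0 nvd1

  # Watch per-device traffic to see whether the cache earns its keep
  zpool iostat -v tank 5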

#freebsd #zfs #dell

#Stories from the #trenches...

Ran a #Debian apt dist-upgrade on my #Proxmox server before moving it from a shared #network cabinet at the #colocation facility in #LA to my personal cabinet, thinking that when I booted it up it would be up to date from the last reboot months ago. When the LACP bundle didn't come up, I found the box sitting at the infamous (initramfs) prompt. Apparently #Linux #Kernel 6.8 removed support for #LSI SAS1068E cards, or at least is very unhappy with them. For the moment I am stuck on kernel 6.5 until I either blast off this #Supermicro X9 server entirely or get an LSI 9311-8i + SAS cables.
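Until the HBA question is settled, one way to keep a Proxmox box on a known-good kernel is proxmox-boot-tool's pin support; a sketch, where the exact version string is whatever "kernel list" reports on your system (6.5.13-6-pve below is a placeholder):

  # See which kernels are installed and which one is the default
  proxmox-boot-tool kernel list

  # Pin the known-good 6.5 kernel so a dist-upgrade can't boot into 6.8
  proxmox-boot-tool kernel pin 6.5.13-6-pve

  # Undo the pin once the LSI 9311-8i (or a replacement box) is in place
  proxmox-boot-tool kernel unpin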

For the cost of upgrading the CPUs, quadrupling the RAM, getting that LSI card + cables, and swapping the 2x10G NIC for a 2x40G NIC, at $420 I could get an X10 box on #eBay with almost the same specs, and if I doubled that to $1000 I could aim for an X11-based box. Decisions, decisions, although the most important one for now is to put off such unnecessary purchases until I land my next job.