#imageprocessing

LavX News
Unveiling HDR Steganography: A New Frontier in Image Concealment
The emergence of HDR steganography introduces a groundbreaking method for hiding images within a single HDR JPEG file, challenging traditional notions of digital content sharing. As this technology ga...
https://news.lavx.hu/article/unveiling-hdr-steganography-a-new-frontier-in-image-concealment
#news #tech #Cybersecurity #ImageProcessing #HDRSteganography

hesselschut
Latenzraum Hbf

A slit-scan video at Amsterdam Centraal railway station, interpreted frame by frame with CLIP and then fed as a prompt to Stable Diffusion, interpolated between frames with ffmpeg, with the per-pixel minimum values of the resulting movie taken to create this image.
This picture: April 2025
Original video: April 2024

#abstract #railwaystation #imageprocessing #computationalart #generativeart #stableduffusion #clip #slitscan #amsterdam #amsterdamcentraalstation

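For anyone curious about the last step of that pipeline (the per-pixel minimum over all frames of the interpolated movie), here is a minimal sketch, not the author's actual script; OpenCV and NumPy are assumed and the filenames are placeholders.

    # Per-pixel minimum projection over a video's frames (illustrative sketch).
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("interpolated.mp4")   # placeholder filename
    minimum = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        minimum = frame if minimum is None else np.minimum(minimum, frame)
    cap.release()
    cv2.imwrite("min_projection.png", minimum)
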
Paul Houle
🥑 NutriTransform: Estimating Nutritional Information From Online Food Posts
(... reminds me of a startup I worked on in the pre-funding stage that would estimate the nutritional content of a meal from a cellphone snap)
https://arxiv.org/abs/2503.04755
#cs #ai #imageprocessing #computing #food #nutrition

Albert Cardona
@NadiaHalidi
Workshop on True Image Deconvolution, Restoration, and Analysis
When: March 13th, 2025 - 3 days from now.
2 hours long: 14:30 - 16:30 CET
Where: online, via Zoom.
Program: https://www.crg.eu/en/event/workshop-true-image-deconvolution-restoration-and-analysis
Registration: https://apps.crg.es/content/internet/events/webforms/workshop-true-image-deconvolution-restoration-and-analysis
#BioimageAnalysis #PSF #deconvolution #ImageProcessing #microscopy

nf-core
Pipeline release! nf-core/molkart v1.1.0 - Resolution Road!
Please see the changelog: https://github.com/nf-core/molkart/releases/tag/1.1.0
#fish #imageprocessing #imaging #molecularcartography #segmentation #singlecell #spatial #transcriptomics #nfcore #openscience #nextflow #bioinformatics

hesselschut
A different perspective

Slit-scan panorama, Almere Buiten, Netherlands, November 2021

#slitscan #slitscanphotography #linescan #linescanimage #slitscangram #abstractstreet #abstractlandscape #urbanphotography #architecturephotography #underconstruction #abstractart #digitalart #creativecoding #imageprocessing #processing #experimentalphotography

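As a rough illustration of how a slit-scan image can be assembled digitally (not necessarily the author's workflow), the sketch below takes the same single pixel column from every frame of a video and lays the columns side by side; OpenCV and NumPy are assumed, filenames are placeholders.

    # Build a slit-scan image from a video: one fixed column per frame, concatenated.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("walkby.mp4")                 # placeholder filename
    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        columns.append(frame[:, frame.shape[1] // 2])    # centre column of this frame
    cap.release()
    cv2.imwrite("slitscan.png", np.stack(columns, axis=1))
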
e11bits :python: :emacs:
Anybody got a recommendation for a pipeline to scan old photos? I'm using SANE's scanimage to batch-scan the photos, but what comes after that? Automatic crop, deskew, despeckle, sharpening, colour correction, and other magic? #photo #archive #linux #computervision #imageprocessing #opensource

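One possible starting point, sketched rather than prescribed: a minimal post-scan cleanup in Python with Pillow that auto-crops a plain white border, normalizes levels, and sharpens. Deskew and despeckle would still need extra tooling; the filenames and the assumed border colour are placeholders.

    # Minimal post-scan cleanup sketch with Pillow: auto-crop, auto-contrast, unsharp mask.
    from PIL import Image, ImageChops, ImageFilter, ImageOps

    def clean_scan(path, out_path, border_color=(255, 255, 255)):
        img = Image.open(path).convert("RGB")
        # Auto-crop: difference against a solid border-coloured canvas, then bounding box.
        bbox = ImageChops.difference(img, Image.new("RGB", img.size, border_color)).getbbox()
        if bbox:
            img = img.crop(bbox)
        img = ImageOps.autocontrast(img, cutoff=1)   # rough levels/colour correction
        img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
        img.save(out_path)

    clean_scan("scan_0001.jpg", "scan_0001_clean.jpg")   # placeholder filenames
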
Henry Fisher
I came across an interesting but absolutely pointless thing.
It sorts pixels by color from a loaded image.
You can try it here: https://solst-ice.github.io/pxl-srt/
#Pixels #ImageProcessing #ColorSorting

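The linked tool runs in the browser; a rough Python equivalent of the idea (sorting every pixel of an image by colour, here by HSV hue and value) might look like the sketch below. Pillow and NumPy are assumed; this is not the code behind pxl-srt.

    # Sort all pixels by hue, then value, and reshape back to the original image size.
    import numpy as np
    from PIL import Image

    img = Image.open("input.png").convert("RGB")     # placeholder filename
    w, h = img.size
    rgb = np.asarray(img).reshape(-1, 3)
    hsv = np.asarray(img.convert("HSV")).reshape(-1, 3)
    order = np.lexsort((hsv[:, 2], hsv[:, 0]))       # primary key: hue; secondary: value
    Image.fromarray(rgb[order].reshape(h, w, 3)).save("sorted.png")
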
TomKrajci 🇺🇦 🏳️‍🌈 🏳️‍⚧️
Comet C/2024 G3 (ATLAS) - seen by the C3 coronagraph on SOHO.
Download these images from:
https://umbra.nascom.nasa.gov/pub/lasco/lastimage/level_05/
...specifically, go to the desired date and camera (C3), such as:
https://umbra.nascom.nasa.gov/pub/lasco/lastimage/level_05/250113/c3/
Images are in FITS format, which can be processed with GIMP:
https://www.gimp.org/
https://en.wikipedia.org/wiki/Solar_and_Heliospheric_Observatory
The comet is within hours of perihelion.
https://en.wikipedia.org/wiki/C/2024_G3_(ATLAS)
1st image is in IR. Three tails are visible.
2nd image is visible light. The brightest part of the comet is grossly overexposed, but faint tail details can be seen. I enhanced visibility with unsharp masking.
When you download FITS files, also get a copy of the img_hdr.txt file, because it tells you which file was taken with which filter.
3rd image is what NASA puts up for public consumption, but the scaling/histogram manipulation makes all parts of the comet white and featureless.
https://soho.nascom.nasa.gov/data/realtime/c3/1024/latest.html
#Comet #SOHO #Coronograph #FITS #GIMP #ImageProcessing #C2024G3 #Astronomy #Astrophotography #Photography

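For readers who prefer a scriptable route to the GIMP workflow above, here is a hedged sketch that loads one LASCO C3 FITS frame, stretches it with percentile clipping so the overexposed core does not swamp the faint tails, and applies an unsharp mask. astropy, scikit-image, and matplotlib are assumed; the filename and parameters are placeholders, not the settings used for these images.

    # Load a FITS frame, stretch it, and apply an unsharp mask (illustrative only).
    import numpy as np
    from astropy.io import fits
    from skimage import exposure
    from skimage.filters import unsharp_mask
    import matplotlib.pyplot as plt

    data = fits.getdata("c3_frame.fts").astype(float)        # placeholder filename
    lo, hi = np.percentile(data, (0.5, 99.5))                # clip extremes before stretching
    stretched = exposure.rescale_intensity(data, in_range=(lo, hi), out_range=(0, 1))
    sharp = unsharp_mask(stretched, radius=10, amount=1.5)   # radius/amount need tuning

    plt.imshow(sharp, cmap="gray", origin="lower")
    plt.show()
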
Laurent Perrinet
#ConvolutionalNeuralNetworks (#CNNs for short) are immensely useful for many #imageProcessing tasks and much more...
Yet you sometimes encounter bits of code with little explanation. Have you ever wondered about the origin of the values used for image normalization on #imagenet?
- Mean: [0.485, 0.456, 0.406] (for the R, G, and B channels respectively)
- Std: [0.229, 0.224, 0.225]
Strangest to me is the apparent need for three-digit precision. Here, after finding the origin of these numbers for MNIST and ImageNet, I test whether that precision really matters: guess what, it does not (so much)!
👉 If you're interested in more details, check out https://laurentperrinet.github.io/sciblog/posts/2024-12-09-normalizing-images-in-convolutional-neural-networks.html

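For context, this is where those three-digit constants usually show up in practice: the conventional torchvision preprocessing pipeline (a standard example, not code from the linked post).

    # The usual ImageNet preprocessing in torchvision, with the normalization constants above.
    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),                      # scales pixel values to [0, 1]
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
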
TomKrajci 🇺🇦 🏳️‍🌈 🏳️‍⚧️
Three-day-old waxing crescent moon - a high-resolution stack of 180 images.
(This is a continuation of my learning that started yesterday: https://universeodon.com/@KrajciTom/113758881213411285 )
I am quite surprised at how much detail I could extract from the image stack, given that I was imaging through 2-1/2 air masses with a mediocre-quality telephoto lens.
Screenshots 2, 3, and 4 show how the user of the deconvolution software needs to carefully choose the size of the Gaussian deconvolution kernel (expressed as a radius in pixels).
This image stack has such a high signal-to-noise ratio that if I boost contrast and brightness, earthshine is clearly visible, with decent detail showing maria, highlands, craters, and ejecta rays.
"Lucky imaging" techniques and software are practically magic for high-resolution imaging.
#NewMexico #Moon #Infrared #Monochrome #Telephoto #Astronomy #Astrophotography #Photography #BnW #Deconvolution #Math #ImageProcessing

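The kernel-radius tuning described above can be reproduced in a few lines with scikit-image's Richardson-Lucy deconvolution and a Gaussian PSF; this is a generic sketch, not the ImPPG settings used for these screenshots, and the filename and sigma values are placeholders.

    # Richardson-Lucy deconvolution of a stacked image with Gaussian PSFs of several radii.
    import numpy as np
    from skimage import io, img_as_float
    from skimage.restoration import richardson_lucy

    def gaussian_psf(sigma):
        size = int(6 * sigma) | 1                  # odd support, roughly +/- 3 sigma
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return psf / psf.sum()

    stack = img_as_float(io.imread("moon_stack.tif"))        # placeholder filename
    for sigma in (1.0, 1.5, 2.5):                            # compare several kernel radii
        restored = richardson_lucy(stack, gaussian_psf(sigma), num_iter=30)
        io.imsave(f"moon_rl_sigma{sigma}.tif", restored)
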
TomKrajci 🇺🇦 🏳️‍🌈 🏳️‍⚧️
New year, new image processing for sharper moon images.
Two-day-old crescent on the evening of 1 January.
This is a first attempt at stacking multiple images and then using deconvolution to enhance the finer details.
I only took a small number of images, but the results are promising.
The 2nd screen grab shows an analysis of my batch of 31 images. The green line is a plot sorted by image quality and shows that about 20% of my images were of high sharpness, half were of medium-low sharpness, and the remaining 30% were pretty bad. (Playing those images in sequence from best to worst was eye-opening.)
In this case I discarded the worst 30% and stacked the remaining images.
Stacking software: https://www.autostakkert.com/
Then I used deconvolution: https://greatattractor.github.io/imppg/
https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution
Next clear night... take many, many images!
#NewMexico #Moon #Infrared #Monochrome #Telephoto #Astronomy #Astrophotography #Photography #BnW #NewYear #Deconvolution #Math #ImageProcessing

Nicolas Ward
Dithered
Please enjoy one (1) 16-color dithered Pike, pixelated and ready to pounce. Created using mavica's (https://maple.pet) Ditherinator (https://maple.pet/ditherinator/).
Also, by chance, this week my friend Branen did a deep dive into the etymology of "dither" in the image-processing context (https://www.etymonline.com/word/dither#etymonline_v_11528) and found a 1912 citation (https://books.google.com/books?id=qexHAQAAMAAJ&newbks=1&newbks_redir=0&printsec=frontcover&pg=PA515&dq=dither&hl=en&source=gb_mobile_entity&ovdme=1#v=onepage&q=dither&f=false) where it meant allowing a little vibration for the smooth movement of a piston, and a 1952 diagram (https://books.google.com/books?id=JtUXAAAAYAAJ&newbks=1&newbks_redir=0&printsec=frontcover&pg=PA15&dq=dither&hl=en&source=gb_mobile_entity&ovdme=1#v=onepage&q=dither&f=false) comparing mechanical and electronic dither.
Or as he summarized: dither as a way of making machines work more smoothly -> dither as a way of decorrelating quantization noise from signal -> dither as a way of making digital images look better.
That's pretty neat.
#ditherinator #imageProcessing #pike #pixelArt #retrocomputing

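The "decorrelating quantization noise" idea in that summary is easy to demonstrate: add a little zero-mean noise before quantizing. The sketch below (Pillow and NumPy assumed, 1-bit output for simplicity, placeholder filenames) is illustrative only; the Ditherinator and Pillow's own Image.convert("1") use error-diffusion dithering, which looks better.

    # Random ("noise") dither before 1-bit quantization, versus plain thresholding.
    import numpy as np
    from PIL import Image

    gray = np.asarray(Image.open("pike.png").convert("L"), dtype=float) / 255.0
    noise = np.random.uniform(-0.25, 0.25, gray.shape)       # small zero-mean dither
    dithered = ((gray + noise) > 0.5).astype(np.uint8) * 255
    Image.fromarray(dithered).save("pike_dithered_1bit.png")
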
Thomas Appéré 🚀🛰📷
Proud to discover today that the 2025 Ciel & Espace calendar includes one of my Martian image-processing works alongside images reworked by @andrealuck and Sean Doran.
This image, taken by Curiosity, will soon be available on galleryastro.fr if you would like to order it as a poster.
#Mars #calendrier #calendar #imageprocessing #space #science #art #photography

🔏 Matthias Wiesmann
So I tried using Krita for some basic editing on Mac OS X. For some reason, all the filters and colour controls are disabled. This a) makes no sense, and b) if there is a reason, I would expect some tooltip or some part of the help system to explain why.
Pro tip: before writing web pages about what awesome features your software has over Photoshop, make sure the most basic features work, because from what I see at the moment, Mac OS X's Preview is more powerful.
#imageprocessing #krita #macosx

Mäh W.
I only had my smartphone camera with me today. But it seems that 2D barcodes are used for roller-coaster train positioning systems nowadays. Wtf? I've only seen this specified for systems at speeds up to 10 m/s, which is likely exceeded here. I think it goes fully around. Any ideas on this, fellow nerds?
#engineering #barcodes #electronics #imageprocessing #rollercoasters

Chris Green
I feel like there is an asymmetry between ML image analysis and synthesis. A convolutional NN doing inference on an image can efficiently evaluate differential operators like the gradient, Laplacian, etc.
But NN image generators can't efficiently do the inverse integration operation when creating images.
Are there generative models whose output is something like a Poisson potential that gets fed to a conventional PDE solver to produce the final image? #generativeai #imageprocessing #neural_networks

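To make the asymmetry concrete: the forward Laplacian is a cheap local stencil, while inverting it is a global Poisson solve; with periodic boundaries the inverse can at least be done spectrally. A small NumPy/SciPy demonstration (not tied to any particular generative model):

    # Forward: discrete Laplacian (local stencil). Inverse: spectral Poisson solve.
    import numpy as np
    from scipy.ndimage import laplace

    img = np.random.rand(128, 128)
    img -= img.mean()                            # zero mean: the Laplacian discards the DC term

    lap = laplace(img, mode="wrap")              # 5-point stencil, periodic boundaries

    # Divide by the eigenvalues of the periodic discrete Laplacian in Fourier space.
    ky, kx = np.meshgrid(np.fft.fftfreq(128), np.fft.fftfreq(128), indexing="ij")
    eig = 2 * (np.cos(2 * np.pi * ky) - 1) + 2 * (np.cos(2 * np.pi * kx) - 1)
    eig[0, 0] = 1                                # avoid dividing by zero at DC
    rec_hat = np.fft.fft2(lap) / eig
    rec_hat[0, 0] = 0
    rec = np.real(np.fft.ifft2(rec_hat))

    print(np.abs(rec - img).max())               # tiny: exact recovery up to the mean
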
R. L. Dane :debian: :openbsd:
Hey #AskFedi / #HiveMind,
I've been given a zip file full of JPEGs of the pages of a #document. I asked for a scan and was given photos. The photos are OK, but some of the pages are curled, more than the GIMP perspective/3D transform tool can correct for.
Are there any good document-scanning programs for #Linux that I can feed a stack of JPEGs to and get a usable #PDF out of? It has to be able to handle some perspective and page-curl issues.
#scanning #PDFs #ImageProcessing

Alex
I wrote my first applied math paper! https://arxiv.org/abs/2410.01799 (apologies for the file size - I forgot to downscale the cat pictures before rendering 😅)
My talk at the 2024 Rust Scientific Computing conference: https://youtu.be/OOgdR3tHQR4
tl;dr: Adapting combinatorial limit theory / regularity-lemma-esque tricks, but using a frame of {-1, 1} instead of {0, 1}, gives you very efficient approximations of matrices and tensors. Sort of like a truncated singular value decomposition, but with sign vectors instead of spatially expensive floating-point precision. Tested on machine learning models, but most importantly, on my cat Angus!
#math #mathematics #scientificcomputing #Rust #combinatorics #imageprocessing #algorithms #hpc #highperformancecomputing

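Not the paper's algorithm, just a toy sketch of the general idea of swapping floating-point singular vectors for {-1, +1} sign vectors in a rank-1 approximation, with the scale refit by least squares (NumPy assumed, synthetic data):

    # Rank-1 sign-vector approximation: signs of the leading singular vectors, refitted scale.
    import numpy as np

    rng = np.random.default_rng(0)
    u_true = np.sign(rng.standard_normal(64))
    v_true = np.sign(rng.standard_normal(64))
    A = 3 * np.outer(u_true, v_true) + rng.standard_normal((64, 64))

    U, s, Vt = np.linalg.svd(A)
    u = np.sign(U[:, 0])
    v = np.sign(Vt[0, :])
    c = u @ A @ v / (len(u) * len(v))            # optimal scale for fixed sign vectors

    approx = c * np.outer(u, v)
    print(np.linalg.norm(A - approx) / np.linalg.norm(A))    # relative approximation error
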
Julien :ve: 🔭:python:
You want to learn astro image processing but don't have astrophotography equipment to capture data?
You are an astrophotographer who can process data faster than you can capture it? (Because ☁️☁️☁️, of course...)
I'm open to sharing the unprocessed stacked masters of any of the images on my Telescopius profile (https://telescopius.com/profile/drgfreeman) under a CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/).
There are so many ways to process astro images; I'd be really happy to see what others can do with my data.
If you are interested, DM me and indicate which image you'd like, and I will set up a download link.
#astrophotography #astrodon #ImageProcessing #AstroImageProcessing #CreativeCommons