ChaoticNeutralCzech

joined 2 years ago

You're right. Later in the video, this shot with the same fake film effect appears, and that one is indubitably AI (look at the bottom right):

The video narration implied this is footage from a rare or unfinished film, though.

[–] ChaoticNeutralCzech@lemmy.one 2 points 1 month ago* (last edited 1 month ago) (1 children)

Which spoiler works in Thunder?

Lemmy syntax:

    ::: spoiler <title>
    <content>
    :::

Reddit syntax:

    >!<content>!<

[–] ChaoticNeutralCzech@lemmy.one 5 points 1 month ago (2 children)

Are you implying it's from a stock-footage site that used AI? They would definitely get their previews indexed by search engines. Alternatively, it was generated on request, which would make it impossible to find.

[–] ChaoticNeutralCzech@lemmy.one 1 points 1 month ago (4 children)

I put them in a spoiler. Compliant viewers should hide them by default.

[–] ChaoticNeutralCzech@lemmy.one 5 points 1 month ago (1 children)

The film artifacts are quite unusual (the vertical lines last exactly one frame, and one of the lighter spots stays in pretty much the same place between frames 1 and 2), but I noticed no other red flags. In fact, the hair is very convincing. The eyes seem to reflect different things, but her right one is somewhat in shadow, and the light-source reflection could plausibly differ because it is so close to her face.

 

Found by @CrayonRosary@lemmy.world - it originates here: Dune by Alejandro Jodorowsky - Teaser Trailer (1976)


Source: used as B-roll in the intro of this video: https://www.youtube.com/watch?v=f8AJk2Sns_k&t=3

Here are the individual frames, but image search (SauceNAO, Google Lens, IQDB, Yandex) has not been helpful.

Frames

(individual frame images omitted)
Transcript: Close-up shot of a woman's face with a neutral expression, short brown '80s hair, lipstick, thick sharp eyeliner and glowing aqua irises. Widescreen with a higher-than-usual amount of film artifacts.

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

This is the last one in the series. Bye!

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Unlike photos, digital art upscaled with a well-trained algorithm will likely show little to no undesirable effect. Why? Well, the drawing originated as a series of brush strokes, fill areas, gradients etc., which could be represented in a vector format but are instead rendered on a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist-Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.)

Suppose I gave you a low-res image of the flag of South Korea 🇰🇷 and asked you to manually upscale it for printing. Knowing that the flag has no small features, so there is no need to guess at detail (an assumption that does not hold for photos), you could redraw it with vector shapes that use the same colors, recreate every stroke and arc in the image, and then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process - not adding detail, just trying to represent the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.
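For a rough sense of what "just adding pixels" looks like without a learned model, here is a minimal sketch using Pillow's classical filters; the filenames are placeholders, and waifu2x itself is a separate tool with its own trained models:

```python
# Minimal sketch: classical (non-AI) 2x upscaling of a drawing with Pillow.
# Placeholder filenames; a learned upscaler like waifu2x goes further by
# recognizing strokes and edges instead of interpolating blindly.
from PIL import Image

src = Image.open("original.png")
w, h = src.size

# NEAREST keeps hard pixel edges; LANCZOS interpolates smoothly.
src.resize((2 * w, 2 * h), Image.Resampling.NEAREST).save("up_nearest.png")
src.resize((2 * w, 2 * h), Image.Resampling.LANCZOS).save("up_lanczos.png")
```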

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Gazebo on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: The Infinity Gauntlet on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

1
submitted 3 months ago* (last edited 3 months ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Frostpunk Automaton on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

See also: Land Dreadnought

1
submitted 3 months ago* (last edited 3 months ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Crabsquid on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

See also: Seamoth and other Subnautica creatures in the comments

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: D20 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Knifehead Kaiju on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Robot (vacuum) cleaner on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: The Satellite-girl on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

This is the Horizon satellite from Random-tan Studio's cybermoe comic Sammy, page 18, prior to remastering.

[–] ChaoticNeutralCzech@lemmy.one 2 points 6 months ago* (last edited 6 months ago)

Lethal humanoid monsters, weird voice acting (likely not AI, though) and "telephone"-distorted audio (it's not just because I limited the bitrate to 20 kb/s to fit under 10 MiB; the YouTube video sounds like that). It's an artistic choice, but not a very rare one, so it was likely not directly inspired by audiobook readings of H. P. Lovecraft.

[–] ChaoticNeutralCzech@lemmy.one 8 points 8 months ago* (last edited 8 months ago)

You are right, QR codes are very easy to decode if you have the raw data: even the C64 should do it in a few seconds, maybe a minute for one of those 22 giant ones. The hard part is the image processing when decoding a camera picture - and even that could be done on the C64 if it has enough time and some external memory (or disks for virtual memory). People have even emulated a 32-bit RISC processor on the poor thing and made it boot Linux.
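For scale, the whole decoding pipeline is a few lines on a modern machine. A minimal sketch, assuming the pyzbar and Pillow libraries are installed and "photo.jpg" is a hypothetical camera picture:

```python
# Minimal sketch: decode QR codes from a camera photo.
# Assumes pyzbar (a wrapper around the zbar library) and Pillow;
# "photo.jpg" is a placeholder filename.
from PIL import Image
from pyzbar.pyzbar import decode

for symbol in decode(Image.open("photo.jpg")):
    print(symbol.type, symbol.data.decode("utf-8", errors="replace"))
```

The zbar library does the hard part (locating the finder patterns and sampling the grid); the Reed-Solomon arithmetic afterwards is the part a C64 could handle in seconds.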

[–] ChaoticNeutralCzech@lemmy.one 12 points 8 months ago* (last edited 8 months ago) (2 children)

Some of them use bismuth, which is as weakly radioactive as it gets, but why? It's still a heavy metal and might be poisonous if parts of it shed off.

[–] ChaoticNeutralCzech@lemmy.one 1 points 8 months ago (1 children)

Yeah, I'm using Joplin over Nextcloud, and it would absolutely be compatible - the Markdown syntax is the same, after all.

[–] ChaoticNeutralCzech@lemmy.one 3 points 8 months ago (6 children)

I wouldn't say "shit", rather "niche". Most people who would love a Reddit-like place already have Reddit and don't hate it enough to switch, especially since we don't have extensive hobby communities with a long history.

[–] ChaoticNeutralCzech@lemmy.one 7 points 8 months ago

In almost all microwaves, the control circuitry (or mechanical switches) only ever switches two or three power circuits: the motor and fan (plus sometimes a separately switched bulb) and the high-voltage heating circuit (transformer, diode, capacitor, magnetron). It can therefore only switch the heating between zero and maximum, usually in a slow PWM cycle (15-30 s period, which hopefully does not coincide with the tray's rotation period). The inputs can be manual only, or sometimes there is also a scale, a moisture sensor and a microphone, along with thermal fuses for safety.
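The power control is easy to picture in code. A minimal sketch of the slow-PWM idea; the 20 s period is an assumption for illustration, real ovens vary:

```python
# Minimal sketch: a microwave's slow PWM power control.
# "50% power" toggles the magnetron fully on/off over a long period;
# it never runs at partial intensity.
import time

def run_cycle(duty: float, period_s: float = 20.0) -> None:
    on_s = duty * period_s
    print(f"magnetron ON  for {on_s:.0f} s")              # full power
    time.sleep(on_s)
    print(f"magnetron OFF for {period_s - on_s:.0f} s")   # zero power
    time.sleep(period_s - on_s)

# A "medium" setting: three 50% cycles; heat spreads during the OFF half.
for _ in range(3):
    run_cycle(duty=0.5)
```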

I think the pizza setting is just a generic medium setting with short 50% cycles that let the heat spread. The popcorn setting can be much more interesting:
https://www.youtube.com/watch?v=Limpr1L8Pss
