Blog
Beneath the Waves AI video and first NFT sale
“Every accomplishment starts with the decision to try.” – John F. Kennedy
Hello there,
Just a few days ago, I shared what I consider to be my best work up to this point. This project represents both my initial venture into creating larger-scale AI videos and my “entry” into electronic music. Despite my limited experience in these areas, I believe the results are interesting.
Here’s the video:
This video is part of the Caerulea Nova series, showcasing the captivating undersea landscapes of this exquisite planet. The composition is divided into four distinct stages, each delving deeper into the ocean’s depths.
One minor setback, however, is the resolution, which unfortunately is somewhat low. The original resolution was merely 768 by 432 pixels, and though I upscaled it to 2K using Topaz Video AI, the sharpness didn’t translate that well (additional sharpening was applied in Davinci Resolve). This is an area I aim to address before embarking on my next major video project.
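To put the upscale in perspective, here’s a quick back-of-the-envelope check (a minimal sketch; I’m assuming “2K” here means QHD, 2560 by 1440 — DCI 2K would be 2048 by 1080):

```python
# Original render resolution (16:9)
src_w, src_h = 768, 432

# Assumed "2K" target: QHD, 2560x1440 (DCI 2K would be 2048x1080)
dst_w, dst_h = 2560, 1440

scale = dst_w / src_w  # identical vertically, since both are 16:9
print(f"Upscale factor: {scale:.2f}x")  # → Upscale factor: 3.33x
```

Asking an upscaler for more than a 3x enlargement is a tall order, which likely explains why the sharpness didn’t hold up.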
For this project, I edited almost entirely in Davinci Resolve, using After Effects only to render certain pre-made smoke effects. I’m delighted to report that working with Davinci Resolve has been an immensely superior experience in all respects: it runs smoothly, rarely glitches, and almost never crashes (unlike After Effects, which crashes constantly despite my 64GB of RAM). Overall, it’s a superb platform.
For those intrigued by Davinci Resolve and considering giving it a try, I recommend this basic color-grading tutorial. The software is free to download, and while the Studio version does offer extra features, it’s not essential.
As previously mentioned, I did apply a selection of effects to the video, some of which proved very effective, while a few appeared slightly out of place. This points to the need for further experimentation as I explore a wider array of effects such as particles, smoke, light effects, morphing, etc.
In other news, I am thrilled to announce my inaugural NFT sale, originating from the Lens to Life collection.
Although the sale value wasn’t particularly high, the significance of this milestone cannot be overstated (for me 😃).
Here’s the piece that was purchased:
Brown-Eyed Beauty photo 🌻
At present, my focus is squarely on producing more videos and music, all the while actively honing my skills. I sense that I’m at a turning point.
Caerulea Nova II, Silhouettes of Nature and more…
“Innovation is the ability to see change as an opportunity – not a threat.” – Steve Jobs
Despite losing a week due to an unexpected move prompted by flooding, I’ve managed to release two NFT collections: ‘Caerulea Nova II’ and ‘Silhouettes of Nature’.
‘Caerulea Nova II’ is a continuation of the animated/video collection ‘Caerulea Nova I’, featuring ambient soundscapes that revolve around the theme of a blue alien planet, with a particular focus on floral life. The limitations of Stable Diffusion compelled me to shift from the original idea of producing solely blue-hued flowers to a broader exploration of diverse plants and flora.
Here is the link to the collection – Caerulea Nova II on objkt.com
On the other hand, ‘Silhouettes of Nature’ is a monochrome photography collection primarily focusing on nature and macro shots, interspersed with the occasional wildlife snapshot. This project excites me as black-and-white photography opens up a different array of possibilities, even with my current gear that, while not bad, is arguably not optimal. After all, the Panasonic GH5s is primarily designed for video.
Here are a few sample shots:
You can see the rest here – Silhouettes of Nature on objkt.com
In other news, I’ve decided to wrap up the Japanese Collection (Haruki’s Dawn) following ‘Moonlit Solitude’, given its lack of traction relative to the substantial time investment it requires.
Here’s the latest video from the collection:
And here is a piano version of it:
You can preview the collection here – Haruki’s Dawn on Foundation
While the animations have significantly improved, they remain challenging to produce and lack the desired dynamism. This is particularly evident when compared to initial tests with Deforum/Parseq. As such, I’ve decided to pivot fully to AI video, specifically focusing on text2video and video2video production.
Echo Chamber – Another Japanese music piece with animated AI art
“Lesser artists borrow, great artists steal.” – Igor Stravinsky
Here’s a little update about my recent music piece.
This time around, I put my primary focus on the music, creating the animation only after the music was fully finished. I’m quite pleased with how it turned out. This approach of music first, animation second seems to have worked well, so I want to use it more often.
In terms of the music, I kept it simple with a 15-track piece, highlighting the delay effect (echo). My aim was to simulate the reverberant sound of a monastery meditation chamber inside a cave.
The entire piece was composed within the C In Sen scale to evoke an authentic Japanese feel. I did attempt to use one of the original ancient Japanese scales, Ritsu, but it didn’t resonate with me. As a 5-tone scale that evolves over octaves, it sounded disconnected/strange to my Western ear, making it challenging to compose with.
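For anyone curious, the scale itself is easy to sketch out. Here’s a minimal Python snippet (the interval pattern is the common Western description of In Sen, and the helper names are my own):

```python
# The In Sen scale in Western terms: the semitone pattern 1-b2-4-5-b7,
# i.e. steps of 0, 1, 5, 7, and 10 semitones above the root.
IN_SEN_STEPS = [0, 1, 5, 7, 10]
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def in_sen(root_midi=60):
    """One octave of the In Sen scale as MIDI note numbers (60 = C4)."""
    return [root_midi + step for step in IN_SEN_STEPS]

print([NOTE_NAMES[n % 12] for n in in_sen(60)])  # ['C', 'Db', 'F', 'G', 'Bb']
```

Rooted on C, that gives C, Db, F, G, and Bb — only five notes, but the flat second is what lends it that distinctly Japanese color.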
The instruments I used include:
- Koto, Bass Koto, Shakuhachi, Taikos – all from the Japanese Bundle VST
- Khadki bells (Noah bells from Soundiron)
- Water pipe (Shake from Soundiron)
- Bamboo sticks (Angklung from Soundiron)
- Twine bass (same name, from Soundiron)
As for the animation, I used a Stable Diffusion image with inpainting and added basic effects in Adobe After Effects: smoke, a candle animation built with fractal noise and turbulent displace, and an opacity flicker on the walls.
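The opacity flicker is essentially smoothed randomness; in After Effects itself it’s usually a one-line wiggle() expression on the layer’s Opacity property. Here’s a rough Python analogue of the idea (illustrative only; the function and parameter names are mine):

```python
import random

def flicker(base=85.0, amp=15.0, smoothing=0.8, frames=120, seed=1):
    """Return one smoothed, pseudo-random opacity value (0-100) per frame.

    A rough analogue of a randomized opacity flicker; not the actual
    After Effects mechanism, just the general technique.
    """
    random.seed(seed)
    values, current = [], base
    for _ in range(frames):
        # Pick a new random target around the base opacity...
        target = base + random.uniform(-amp, amp)
        # ...and ease toward it so the flicker isn't jittery.
        current = smoothing * current + (1 - smoothing) * target
        values.append(max(0.0, min(100.0, current)))
    return values

opacities = flicker()
print(len(opacities), round(min(opacities), 1), round(max(opacities), 1))
```

Lower the smoothing for a harsher, candle-like flicker, or raise it for a slow breathing effect.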
Currently, I’m starting my next musical piece for this collection!
My plan is to make it more “familiar” or “common” to Western ears by using a traditional minor scale and a piano. I’ll then blend in Japanese instruments for an exotic touch.
Stay tuned!
A Warm Welcome: My HUG Journey So Far
“The object of art is not to reproduce reality, but to create a reality of the same intensity.” – Alberto Giacometti
Let’s try to catch up with more stuff – I did join the HUG NFT community and was approved (yay!) a while ago.
However, only today did I finally get my artist profile sorted out. I faced some technical glitches and couldn’t access my page because of MetaMask. That’s all sorted now, though.
While it’s still a work in progress, you can check it out here: Aleksejs Zuravlovs @ HUG
In more exciting news, my art was picked to be featured in the “Code & Culture: The Digital Dialogue Between Beauty & Artificial Intelligence” exhibition in New York City, alongside the works of 23 other HUG artists.
This piece was the chosen one:
The event, hosted privately by L’Oréal, is set to take place on June 29th. I’m eager to see whether it will provide a boost in exposure and traffic for my NFTs.
And I’ve just finished submitting another art piece for HUG’s upcoming collaboration, “Artful Palate: Celebrating Cuisine and Creativity”, set to take place at Botanica Bar SD in San Diego. This time only 7 artists’ works will be chosen, but the display duration is a whopping 3 weeks! Fingers crossed; we’ll see if my submission makes the cut.
Stay tuned for more updates! 😁
Art NFT Collection 3 – Haruki’s Dawn (includes traditional Japanese music)
“Pain is inevitable. Suffering is optional. Say you’re running and you think, ‘Man, this hurts, I can’t take it anymore.’ The ‘hurt’ part is an unavoidable reality, but whether or not you can stand anymore is up to the runner himself.” – Haruki Murakami
The past few weeks have been crazy!
I’ve been so caught up that I couldn’t even write a post about the completion of my second project. It was finished a few days ago (or more?), but it didn’t quite work out as I had envisioned. Given the timeline, I had to rush to wrap everything up.
Here it is:
One of the primary problems lay with Stable Diffusion (SD) inpainting. Compared to other similar AI image generators, SD is cumbersome and doesn’t exactly offer a user-friendly experience. In fact, its inpainting module feels as outdated as the 1998 version of MS Paint. Frequent crashes and bugs, predominantly in “sketch inpaint” mode, only add to the problem.
On the bright side, SD can be integrated directly into Photoshop. This enhances productivity as it leverages Photoshop’s superior tools and efficiency, enabling the generation of higher-quality images than Photoshop’s Generative Fill (which can be useful for small fixes, but not for creating art, at this point).
Here’s one of the tutorials that can help:
Unfortunately, I was unable to complete the integration, since I have Stable Diffusion installed on another PC with a better graphics card (a 3080 Ti, compared to the 2080 in my “work” PC). Currently, I’m trying to find a workaround and waiting for help from the Discord community.
For those who use Stable Diffusion and understand Russian, I strongly recommend XpucT’s tutorials. As the creator of Deliberate and Reliberate models, he possesses an extensive understanding of the functionality of Stable Diffusion as a whole.
Here’s a good one:
I also vouch for his main development project, a cleanup and optimization tool for Windows 10. It’s clearly superior to popular software like CCleaner (it cleans far more data, speeds up the PC, and even optimizes the Windows 10 layout, which pleasantly surprised me). There is a free version as well, available at https://win10tweaker.ru/
Now, let’s steer back to the project.
When it came to the animations, it was my first attempt at making some kind of story, and it was difficult, given my beginner skills in Adobe After Effects. Here were my first-ever animations, discussed in an older post.
A good learning resource for me has been @0xFramer on Twitter, from whom I learned a great deal.
Here is one of his tutorials:
I transformed a static MJ image into an animated video with sound.
It took me 4.5 hours, and I used AI tools since I still don't know how to use Photoshop.
This is how I did it 👇 pic.twitter.com/9aBoVCShPk
— Framer 🇱🇹 (@0xFramer) April 14, 2023
His tutorials are not overly complicated but, when applied correctly, highly effective.
The learning curve continues as I am aiming to enhance my Adobe After Effects skills by completing a few courses over the coming weeks. Simultaneously, I am going through cinematography guides to build a strong foundation in storytelling, which was pretty much absent in this project 😀
On a positive note, I believe the music this time around was an improvement, and I’m confident that it will continue to get better.
For the Japanese traditional instruments, I used these VSTs:
Instruments of Japan Bundle
Their quality is impeccable, although they are a tad pricey. But well worth the investment, in my opinion.
As of now, I’m crafting another musical piece, this time, with an emphasis on the music, accompanied by a looping animation.
If you haven’t had a chance to view my first big project, you can check it out HERE.
Here’s also the current NFT collection for this project: Foundation – Haruki’s Dawn
Software used:
Stable Diffusion (for art creation)
Topaz Gigapixel AI & Topaz Video AI (for upscaling/downscaling when needed)
Lightroom & Photoshop (for editing the art)
Adobe After Effects (for animation)
Cubase (for music composition)
Davinci Resolve (for final production)
The lesson: Rushing headfirst may not always yield the best results. 😀 A little bit of planning and preparation could have made this project much more impactful.