Critter & Guitari - EYESY Video Synthesizer

Thanks! :pray: Really glad you enjoyed it!

I did some ofLua experiments last year (I’ve got a YouTube playlist here) where I adapted some modes from Patch Storage.

I really just tend to adapt existing modes, because my coding skills are non-existent! It can take a while to figure out what does what; it’s pretty clunky. But on this occasion, it actually came together relatively easily. You’re certainly ahead of me on the coding front if you do some for your job :grimacing:

On a practical note about using the modes: I often feed specific sound elements into the Eyesy audio input, as opposed to a whole track - it can give you better visual clarity. And the variety you can get by varying the volume going into it in real time (I use a send from my mixing desk for full control) is also really useful. Plus, some good old real-time knob tweaking never fails to liven things up!


Do you sequence/modulate the Eyesy mode parameters via MIDI, or do you just tweak the Eyesy knobs manually during video recording?

Looking to hire someone to code some custom modes and patches for my Eyesy. Please shoot me an email at directornocap@gmail.com. I am looking to hire someone ASAP! I appreciate your time :slight_smile:

hey, y’all. i just posted something similar on the critter and guitari forum, but this is nuts

i asked ChatGPT, “can you write me a code for eyesy video synthesizer that is audio reactive and creates psychedelic images?”

and here’s what it spit out (i’ve not had the chance to load it up. it was a curiosity query that i didn’t know would actually produce anything):

#include "ofMain.h"
#include "ofxMaxim.h"

class ofApp : public ofBaseApp {
public:
    void setup() {
        ofSetBackgroundColor(0);
        ofSetFrameRate(60);
        ofEnableAlphaBlending();

        // Initialize audio input
        audioInput.setup(0, 2, 44100, 512, 3);
        audioInput.setOutputVolume(0);
        audioInput.setUseTexture(false);

        // Set up the audio analyzer
        analyzer.setup(512, 256, 0);
        analyzer.setAverages(8, 3);

        // Set up the visualizer
        visualizer.setup(ofGetWidth(), ofGetHeight());
    }

    void update() {
        // Get the audio data
        audioInput.getNextBuffer(audioBuffer, 512, 2);

        // Analyze the audio
        analyzer.calculate(audioBuffer);

        // Map the audio data to visual parameters
        float hue = ofMap(analyzer.getAverage(0), 0, 0.1, 0, 255);
        float saturation = ofMap(analyzer.getAverage(1), 0, 0.1, 0, 255);
        float brightness = ofMap(analyzer.getAverage(2), 0, 0.1, 0, 255);
        float speed = ofMap(analyzer.getAverage(3), 0, 0.1, 0, 5);

        // Update the visualizer
        visualizer.update(hue, saturation, brightness, speed);
    }

    void draw() {
        // Draw the visualizer
        visualizer.draw();
    }

private:
    ofxMaxiMix audioInput;
    float audioBuffer[512 * 2];
    ofxMaxiFFT analyzer;
    ofxMaxiFFTOctaveAnalyzer octaveAnalyzer;
    Visualizer visualizer;
};

class Visualizer {
public:
    void setup(int width, int height) {
        ofSeedRandom(42);
        mesh.setMode(OF_PRIMITIVE_TRIANGLES);
        for (int i = 0; i < 1000; i++) {
            mesh.addVertex(ofVec3f(ofRandom(width), ofRandom(height)));
            mesh.addColor(ofColor(0, 0, 0, 0));
        }
        for (int i = 0; i < 2000; i++) {
            mesh.addIndex(ofRandom(1000));
            mesh.addIndex(ofRandom(1000));
            mesh.addIndex(ofRandom(1000));
        }
    }

    void update(float hue, float saturation, float brightness, float speed) {
        for (int i = 0; i < mesh.getNumVertices(); i++) {
            ofVec3f& vertex = mesh.getVertices()[i];
            ofColor& color = mesh.getColors()[i];

            vertex.x += ofSignedNoise(vertex.x / 100.0, vertex.y / 100.0, ofGetElapsedTimef() * speed) * 5;
            vertex.y += ofSignedNoise(vertex.x / 100.0, vertex.y / 100.0, (ofGetElapsedTimef() * speed) + 1000) * 5;

            float alpha = ofMap(ofDist(vertex.x, vertex.y, ofGetWidth() / 2, ofGetHeight() / 2), 0, ofDist(0, 0, ofGetWidth() / 2, ofGetHeight() / 2), 255, 0);
            color.setHsb(hue, saturation, brightness,

not sure why it formats like that. i’ve got it saved on my computer if you have compiler software. i do not, outside of the online eyesy thingamajig


Would be great to know if this works… I need to build a bunch of patches in the coming weeks and this could help save some time!


Did anyone respond?

Does anyone have patches they can share? Unfortunately I don’t have the bandwidth to learn a programming language at the moment, so I’m reliant on the Patch Storage site for anything new.
Thanks
Chris


How easy is it to record the visuals and put them on one of your tracks? I’m new to this but I’m highly interested in this product. Thanks.

Really interested in whether this works. I have been struggling just to find some basic resources for ofLua (I learn best by taking examples and tweaking/deconstructing them) so this could potentially be a huge step forward for me.
Will try to give some AI-generated examples a go this weekend.

In order to record the Eyesy video stream, you need a video capture card.

the above code example is incomplete, as it’s not quoted properly…
@PlumWrinkles, can you edit it and put code tags around the whole lot?

also I just tried ChatGPT with the same query, and it’s saying it cannot generate code for external devices, and just gives ‘suggestions’ on how to write this, but no code.

so I tried to coax it into creating something that might work,
by not making it device-specific and asking for a general ofx application.
but I could see it was not using live input, nor outputting to a display,
so I had to prompt it to make changes.
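
for reference, the kind of minimal skeleton I was nudging it towards looks something like this: a bare openFrameworks app (not Eyesy-specific) that takes live audio in and draws to the display. just a sketch using the standard oF sound stream API, untested here:

#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    void setup() {
        ofSetBackgroundColor(0);
        // route live audio into audioIn() below
        ofSoundStreamSettings settings;
        settings.setInListener(this);
        settings.sampleRate = 44100;
        settings.numInputChannels = 1;
        settings.numOutputChannels = 0;
        settings.bufferSize = 256;
        soundStream.setup(settings);
    }

    void audioIn(ofSoundBuffer& input) override {
        // root-mean-square of the buffer = a simple loudness estimate
        float sum = 0;
        for (size_t i = 0; i < input.getNumFrames(); i++) {
            sum += input[i] * input[i];
        }
        rms = sqrt(sum / input.getNumFrames());
    }

    void draw() {
        // the simplest possible audio-reactive visual: a circle that follows level
        ofSetColor(255);
        ofDrawCircle(ofGetWidth() / 2, ofGetHeight() / 2, rms * 1000);
    }

private:
    ofSoundStream soundStream;
    float rms = 0;
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}

nothing fancy, but it covers the two things the generated code kept missing: a live input callback, and something actually drawn to the screen.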

and this gets to the crux of the issue with ChatGPT and coding…

it often gets it wrong, or doesn’t go where you want it to go.

does this make it useless?

yes, and no…
as a non-developer it’s really limited in its use…
if it makes mistakes, or goes the wrong way, you’ll have no idea how to get it back on track.
also, you don’t know how/why it’s making various choices.
e.g. in the above example, why use a frequency spectrum rather than amplitudes?

(in fairness, these are the same issues non-devs face when they start trying to copy/paste bits of code together… which is kind of what ChatGPT is doing)
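
to make the amplitude vs. spectrum question concrete, here’s the difference in a few lines of C++. the helper names are mine, and the single-bin DFT is deliberately naive, just for illustration; a real mode would use a proper FFT:

#include <cmath>
#include <cstddef>

// Amplitude: one number for overall loudness; reacts to everything in the mix.
float rms(const float* buf, size_t n) {
    float sum = 0;
    for (size_t i = 0; i < n; i++) sum += buf[i] * buf[i];
    return std::sqrt(sum / n);
}

// Spectrum: energy at one frequency, so a visual can follow just the kick
// or just the hi-hats. Naive single-bin DFT; use a real FFT for many bands.
float bandMagnitude(const float* buf, size_t n, float freqHz, float sampleRate) {
    const float kTwoPi = 6.28318530718f;
    float re = 0, im = 0;
    for (size_t i = 0; i < n; i++) {
        float phase = kTwoPi * freqHz * static_cast<float>(i) / sampleRate;
        re += buf[i] * std::cos(phase);
        im -= buf[i] * std::sin(phase);
    }
    return std::sqrt(re * re + im * im) / n;
}

neither choice is ‘right’: amplitude tracks overall level, while a spectrum lets different visual elements follow different parts of the mix. the point is ChatGPT picked one without telling you why.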

but as a developer, it becomes much more interesting.
in the above example, I could not get it to create code that would work on the Eyesy (I could see that from the code it generated),
but I could get it to generate various ‘snippets’ that I could possibly use.

so it’s possibly interesting as a developer tool.

a code example…
try asking it, “create C++ code for a state variable low pass filter”,
and you get something you could use when creating a plugin.
again, it’s not complete, but it’s a good starting point…
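
for anyone curious, the textbook answer that prompt points at is the Chamberlin state variable filter, which in its bare form is only a few lines. this is my own sketch of it, not ChatGPT’s output (coefficient names are mine):

#include <cmath>

// Chamberlin state variable filter, low-pass output.
// f is the tuning coefficient, q is the damping (roughly 1/Q).
struct StateVariableFilter {
    float f = 0.1f;
    float q = 0.7f;
    float low = 0.0f;   // low-pass state
    float band = 0.0f;  // band-pass state

    void setCutoff(float cutoffHz, float sampleRate) {
        f = 2.0f * std::sin(3.14159265f * cutoffHz / sampleRate);
    }

    float processLowpass(float in) {
        low += f * band;                   // integrate band into low
        float high = in - low - q * band;  // what's left is the high-pass
        band += f * high;                  // integrate high into band
        return low;
    }
};

the missing pieces (parameter smoothing, keeping it stable as the cutoff approaches Nyquist) are exactly the parts these generated answers tend to skip.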

"wow, this makes coding so much easier!!’
no, not really…
you could ask google the same question, and you would be pointed to code all around the internet, code which has been tested and peered reviewed, as well as articles on how/why it works.

… so really, what we are seeing here is just another way of getting to similar information, presented in a different way.

arguably, it’s therefore quicker, but more ‘shallow’, which is sometimes useful.

so no, as a developer I have zero fear this will put me out of a job.
… rather, it’s just another tool in the toolbox.


This is the problem I was having. Struggling to find much useful material on Google re. ofLua.
I totally recognise that the “write this complete and fully functional code for me” strategy is pie in the sky at the moment.
But I am reasonably competent in R stats coding now, mostly from taking ideas from existing pieces of code, working out what they do and manipulating them to achieve what I need. So I was hoping to explore a similar approach.


interesting… I wondered about the license of the code it produces, and in typical ChatGPT style… it sat on the fence.

now it’s 100% correct on the first part, and for this simple example it was an easy answer… but the second paragraph shows it could quite easily spit out code that was covered under some existing license.

of course, if you cut n’ paste code from the internet, there would be the same potential for issues too…

BUT… it’s an interesting question for the future: if AI generates code, who does it belong to?
the person asking the question? or the people who run the AI server?

yeah, this is, I believe, where we are at today…

that’s not unusual… technology is often used first as a tool to supplement what we have, before it can be truly autonomous and used by ‘non-experts’… AI is no different.

:laughing:

I really love ChatGPT’s confidence/positivity :laughing:

another attempt at getting it to write the code for the Eyesy.

"can you write me code for eyesy video synthesizer that is audio reactive and creates psychedelic images using ofx "

"Sure, here’s an example of an OFX application that generates psychedelic visual effects in response to live audio input using the Eyesy video synthesizer "

it gives me something that’s not going to run, and doesn’t do what I asked for, but hey… perhaps it can tell me how to fix it :wink:

“where can I get ofxEyesy.h?”

“To use ofxEyesy, you need to download and install it from the Eyesy GitHub repository: https://github.com/Eyesy/ofxEyesy

" I apologize for the confusion. It appears that the ofxEyesy addon was removed from the Eyesy GitHub repository. However, you can still download an earlier version of the addon from the Eyesy website.

Here’s the steps to download and install the ofxEyesy addon:

  1. Go to the Eyesy website: https://docs.rebeltech.org/manuals/eyesy/
    "

of course, this link is invalid too, as let’s remember, Rebel Tech had nothing to do with Eyesy :wink:

and if you ask ChatGPT who created Eyesy, it ‘knows’ this… it knows it’s C&G, not Rebel Tech.

so yeah, YMMV - it’s best to ask ChatGPT about things you already know about :laughing:


Is this a specialized card, as opposed to a common video card? Thanks.

eBay and Amazon have cheap HDMI to USB video capture cards.

I’ve used this type with a Raspberry Pi video synth (not Eyesy though).

In my use case, it was actually more reliable with a MacBook Pro than a much more expensive Blackmagic card.


hmm… i wonder if they locked it down with regard to specific devices.

it worked the one night i tried it.

i still have the code copy/pasted on my computer and can try to post it again

The Eyesy generates an HDMI video signal, so you need an HDMI to USB card. I use a simple AVerMedia GC311 game capture box. It has HDMI in and out, and a USB out.


I tried yesterday. Asked it to generate me some Ryoji Ikeda-style visuals in ofLua:

-- Define the size of the window
local windowWidth = 800
local windowHeight = 600

-- Define the number of dots to draw
local numDots = 1000

-- Define the size of each dot
local dotSize = 2

-- Define the maximum distance between dots
local maxDistance = 100

-- Define the colors to use
local backgroundColor = ofColor(0, 0, 0)
local dotColor = ofColor(255, 255, 255)

-- Define the function to generate a random position for a dot
local function getRandomPosition()
    return ofVec2f(ofRandom(windowWidth), ofRandom(windowHeight))
end

-- Define the function to draw a dot
local function drawDot(position)
    dotColor:bind()
    ofDrawCircle(position, dotSize)
end

-- Define the function to draw the dots
local function drawDots()
    backgroundColor:bind()
    ofDrawRectangle(0, 0, windowWidth, windowHeight)
    for i = 1, numDots do
        local position1 = getRandomPosition()
        local position2 = getRandomPosition()
        local distance = position1:distance(position2)
        if distance < maxDistance then
            drawDot(position1)
        end
    end
end

-- Define the setup function
function setup()
    -- Set up the window
    ofSetWindowTitle("Ryoji Ikeda-style visuals")
    ofSetWindowShape(windowWidth, windowHeight)
    ofSetBackgroundColor(backgroundColor)
end

-- Define the draw function
function draw()
    drawDots()
end

:man_shrugging:

The HDMI ports on my video card are all outputs, so that’s why I need this kind of device?

After that, do I need specific software to do the recordings?
