Hello there, I'm currently working on a Nexys A7 board together with an I2S2 PMOD. I've successfully managed to hear the guitar sound coming out of the I2S2 PMOD into my speakers, using example code I found for an I2S transceiver. I'm wondering what the correct way (VHDL-wise) is to create the effects (DSP) on the data. Currently, as I've mentioned, the design is a "pass-through", meaning that the sound that comes in immediately comes out. Should I store the incoming data (or some amount of it) in a BRAM of some sort and then apply changes to the stored data prior to transmitting it back? Or can I just pass the incoming data straight through an RTL block of some sort (or a DSP IP) that modifies the data and then pass it forward to the I2S transmitter? More specifically, I'm wondering what I can do with the I2S data itself. Any VHDL code that can modify it (LPF, HPF, delay...) would be greatly appreciated.
This is in no way I2S specific, and a lot of free IP cores on the net use I2S. The math and structure behind digital signal processing are not easy by any means, so understanding FIR/IIR filters, how they work, their parameters and so on takes more than just some "VHDL code". The devil is in the details. (Oh, delay is rather easy, basically just like you said.) The problem is not the VHDL-specific implementation, but rather the algorithm. To start, I would pick up the basics of digital signal processing from a book or website and then look at existing FIR implementations of LPF/HPF on GitHub/OpenCores. The problems you will have with stability, phase, clipping, normalization and the correct parameters for a given sampling rate will keep you busy in the beginning and will lead to more detailed questions you can ask on the internet.
Of course it is not simply a matter of VHDL code. I took a DSP course in college, so I know what you mean. What I'm asking for is a snippet of VHDL code just to grasp the idea of what (or more specifically, how) I can modify the I2S data. Without getting into too much detail about IIR or FIR filters, I just want to see some effect being applied to the I2S data. If someone could provide me with VHDL code that takes the I2S data and, for example, passes it through a FIFO to create a basic delay, even that would be helpful. I just need to see an example of a way to modify the incoming data prior to transmitting it back. Thanks!
I2S in VHDL is simple: practically no more than a shift register and an L/R clock as an indicator of which bits belong to left and which to right. opencores.org has plenty of sample code. The processing is a different matter: what kind of "effect" do you need? Regarding guitar effects, see the thread "VHDL-Effektgerät für Gitarre und andere Instrumente".
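Just to show the "shift register plus L/R clock" idea, here is a rough receiver sketch. All names are made up, it assumes 24-bit words sent MSB first with data stable on the rising SCLK edge, and it ignores the one-bit-clock data offset of standard I2S, so treat it as an illustration, not a drop-in core:

library ieee;
use ieee.std_logic_1164.all;

-- Minimal I2S receiver sketch: shift the serial data in on SCLK and
-- latch a parallel word whenever the word-select (L/R) clock changes.
entity i2s_rx_sketch is
  generic (
    DATA_WIDTH : integer := 24
  );
  port (
    sclk       : in  std_logic;                               -- I2S bit clock
    ws         : in  std_logic;                               -- word select (L/R clock)
    sd         : in  std_logic;                               -- serial data, MSB first
    l_sample   : out std_logic_vector(DATA_WIDTH-1 downto 0); -- last left sample
    r_sample   : out std_logic_vector(DATA_WIDTH-1 downto 0); -- last right sample
    sample_vld : out std_logic                                -- pulses once per L/R pair
  );
end entity;

architecture rtl of i2s_rx_sketch is
  signal shift_reg : std_logic_vector(DATA_WIDTH-1 downto 0) := (others => '0');
  signal ws_d      : std_logic := '0';
begin
  process(sclk)
  begin
    if rising_edge(sclk) then
      sample_vld <= '0';
      -- collect the serial bits, MSB first
      shift_reg <= shift_reg(DATA_WIDTH-2 downto 0) & sd;
      -- on a WS edge the previous word is complete: store it
      ws_d <= ws;
      if ws /= ws_d then
        if ws_d = '0' then            -- standard I2S: WS low = left channel
          l_sample <= shift_reg;
        else
          r_sample <= shift_reg;
          sample_vld <= '1';          -- both channels of the frame captured
        end if;
        shift_reg <= (others => '0');
      end if;
    end if;
  end process;
end architecture;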
I've accomplished the I2S part as mentioned in my question. The thing is that currently all incoming data is immediately passed back to the DAC, so you hear exactly what you play. First of all I would like to accomplish any meaningful effect on the data, whether passing it through an LPF or HPF. If a delay effect is simpler to realize, then a delay effect would be good. Currently I just want to understand how to manipulate the data rather than accomplish the best implementation of any effect. Once I get the idea of how to operate on the incoming I2S data prior to transmitting it back, I'll dive deeper into what is necessary to create the desired effect. Thanks!
Hey Daniel, this seems like a really open-ended question, which makes it hard to help. I2S data is just a fixed-point integer value that represents a discrete audio sample of a continuous stream (usually separated into left/right, or simply two different streams, by the LR clock). The DSP part is what you already mentioned; we have no idea what your "sample" should do, too many external factors determine that. Really have a look at GitHub/OpenCores and look at sample FIR filters to get an understanding of what they expect and what they require. For the first baby steps you could really just use a BRAM as a FIFO, store some values in it and read them back out later (after a time span relevant for audio) - BAM - you have a delay. Put a multiplier behind that in the stream and you can control the volume (take care of normalization and clipping).
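To make the BRAM-as-FIFO idea concrete, here is a rough sketch of a delay line working on parallel samples. It assumes your receiver already delivers a signed parallel sample together with a one-clock "valid" pulse per frame; the names (sample_in, sample_valid) and the depth are made up for the example:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sample delay sketch: a circular buffer that should infer block RAM.
-- At a 48 kHz sample rate, DEPTH = 12000 gives roughly 250 ms of delay.
entity sample_delay_sketch is
  generic (
    DATA_WIDTH : integer := 24;
    DEPTH      : integer := 12000
  );
  port (
    clk          : in  std_logic;
    sample_valid : in  std_logic;                     -- one pulse per incoming sample
    sample_in    : in  signed(DATA_WIDTH-1 downto 0);
    sample_out   : out signed(DATA_WIDTH-1 downto 0)  -- the sample from DEPTH frames ago
  );
end entity;

architecture rtl of sample_delay_sketch is
  type ram_t is array (0 to DEPTH-1) of signed(DATA_WIDTH-1 downto 0);
  signal ram     : ram_t := (others => (others => '0'));
  signal wr_addr : integer range 0 to DEPTH-1 := 0;
begin
  process(clk)
  begin
    if rising_edge(clk) then
      if sample_valid = '1' then
        sample_out   <= ram(wr_addr);  -- read the oldest sample ...
        ram(wr_addr) <= sample_in;     -- ... and overwrite it with the newest
        if wr_addr = DEPTH-1 then
          wr_addr <= 0;
        else
          wr_addr <= wr_addr + 1;
        end if;
      end if;
    end if;
  end process;
end architecture;

For an echo rather than a pure delay you would add a scaled copy of the delayed sample back onto the dry signal, which is exactly where the multiplier (and the normalization/clipping care) comes in.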
Daniel wrote:
> Currently I just want to understand how to manipulate the data rather

There are many different possibilities. You may start with the overview shown in Wikipedia: https://en.wikipedia.org/wiki/Sound_effect#Techniques An extremely simple effect would be clipping (mentioned as a basic implementation of "overdrive" in the Wikipedia article).
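Just to show how little RTL such a basic effect needs: a hard clipper on a parallel signed sample is essentially one comparison per polarity. The entity name and threshold below are invented for the example; this is the "overdrive" idea from the article, not a tuned guitar sound:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hard clipper sketch: limits each sample to +/- THRESHOLD.
entity clipper_sketch is
  generic (
    DATA_WIDTH : integer := 24;
    THRESHOLD  : integer := 2**20    -- clip level, well below full scale (2**23 - 1)
  );
  port (
    clk          : in  std_logic;
    sample_valid : in  std_logic;
    sample_in    : in  signed(DATA_WIDTH-1 downto 0);
    sample_out   : out signed(DATA_WIDTH-1 downto 0)
  );
end entity;

architecture rtl of clipper_sketch is
begin
  process(clk)
  begin
    if rising_edge(clk) then
      if sample_valid = '1' then
        if sample_in > to_signed(THRESHOLD, DATA_WIDTH) then
          sample_out <= to_signed(THRESHOLD, DATA_WIDTH);
        elsif sample_in < to_signed(-THRESHOLD, DATA_WIDTH) then
          sample_out <= to_signed(-THRESHOLD, DATA_WIDTH);
        else
          sample_out <= sample_in;
        end if;
      end if;
    end if;
  end process;
end architecture;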
Hello, I'll try to explain myself a little better. What I'm trying to understand about the incoming I2S data is whether I can apply the effects directly on the stream.

I've seen a project where AXI was used in order to incorporate an FIR filter in the design, another thing that I didn't quite understand: since the AXI stream was established between the I2S receiver and the I2S transmitter, why not just output the receiver's data into the FIR and the FIR's output to the transmitter? I mean, we already have the I2S protocol, why do we need another communication protocol? (https://www.controlpaths.com/2021/06/28/audio-equalizer-based-on-fir-filters/)

Back to what I started writing at the beginning of this reply, I'm confused about the VHDL-wise way to manipulate the I2S data. I believe that any effect should be applied to both the L and R signals, but can I simply operate on the L and R STD_LOGIC_VECTORs? Does any modification to the I2S data have to be done (serdes?) in order to manipulate it? Is there a need for an FFT on the I2S data in order to apply effects to it?

That's what I'm trying to understand, basically. Without going deep into how good the result will be after applying the effect, I'm trying to understand how to apply the effect. Again, trying to explain a little better: VHDL-wise, is it possible to write RTL code that will create an effect on the I2S data? Or do I have to use an IP like MicroBlaze and write C++ for the audio processing?

Thanks for any help or advice, relevant VHDL code would be of great help!
Noooow we see more of the problems you have. Let's start:

Daniel wrote:
> I've seen a project where AXI was used in order to incorporate an FIR
> filter in the design, another thing that I didn't quite understand:
> since the AXI stream was established between the I2S receiver and the
> I2S transmitter, why not just output the receiver's data into the FIR
> and the FIR's output to the transmitter? I mean, we already have the
> I2S protocol, why do we need another communication protocol?
> (https://www.controlpaths.com/2021/06/28/audio-equalizer-based-on-fir-filters/)

As I2S is a bit-serial protocol and you want to do manipulations on one discrete sample (as most algorithms do), at some point you have to do a serial-to-parallel conversion. With a parallel sample you can then work, and later you go back to serial and send out the stream. AXI is one choice for the "parallel" streaming interface, but anything is good enough for that (for the beginning just have 32 data bits and one simple "valid sample" signal; AXI again adds complexity). It makes sense to do this conversion before your digital processing at the rather slow clock speed of your I2S stream, go to a faster clock domain (WATCH OUT, clock domain crossing is really not simple; at first, accept the added delay and stay in one slow I2S-related clock domain), do all the processing on your samples and, at the end, just before you send out the data, go back to serial.

> Back to what I started writing at the beginning of this reply, I'm
> confused about the VHDL-wise way to manipulate the I2S data.
> I believe that any effect should be applied to both the L and R
> signals, but can I simply operate on the L and R STD_LOGIC_VECTORs?
> Does any modification to the I2S data have to be done (serdes?) in
> order to manipulate it?

Kind of answered above. VHDL is just a hardware description, so in the end you need to think about the hardware. In most relevant applications you would need to process the left sample differently from the right sample, and therefore need to separate them into different processing pipelines first. Yes, the modification is always applied to the data sample (but modifying the bits serially is most likely much harder than sample-based processing).

> Is there a need for an FFT on the I2S data in order to apply effects to
> it?

No, absolutely not. The FFT is interesting, but LPF/HPF is done with FIR filters. An FFT again adds a lot of processing you would want to avoid for the first steps (and probably for most audio effects).

> That's what I'm trying to understand, basically.
> Without going deep into how good the result will be after applying the
> effect, I'm trying to understand how to apply the effect.
> Again, trying to explain a little better: VHDL-wise, is it possible to
> write RTL code that will create an effect on the I2S data?
> Or do I have to use an IP like MicroBlaze and write C++ for the audio
> processing?

No, you don't need a MicroBlaze. You can do all the modification in VHDL, but you need to understand how you want to modify the samples and what the hardware actually needs to look like, and then find a way to describe that in VHDL (on top of the theoretical audio DSP side). But again, if you don't know how the protocol works, or a way to get the relevant data into the hardware of your FPGA, you will have more problems. Whether the HUGE entry barrier of VHDL is worth it to you for guitar effects is another topic.
Besides all the learning aspects of this exercise, it makes sense to prototype your audio manipulation somewhere else ('regular' software is great for that) and then port the algorithm to a hardware structure and then into VHDL.

> Thanks for any help or advice,
> relevant VHDL code would be of great help!

And for the third time: look at I2S examples and at any audio RTL code examples you can find on the internet. I think you still have a huge misconception about the way I2S works. We cannot do this work for you.
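To illustrate the "32 data bits plus one valid signal" alternative to AXI mentioned above: a processing stage can be as simple as the following sketch sitting between receiver and transmitter, all in one clock domain. The trivial "effect" here is just halving the volume; names and widths are made up, and the real filter/delay/distortion logic would go in its place:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Skeleton of a sample-based processing stage between the I2S receiver
-- and the I2S transmitter, using only parallel data plus a valid pulse.
entity effect_stage_sketch is
  generic (
    DATA_WIDTH : integer := 24
  );
  port (
    clk       : in  std_logic;                         -- same clock domain as the I2S logic
    in_valid  : in  std_logic;                         -- one pulse per received frame
    in_left   : in  signed(DATA_WIDTH-1 downto 0);
    in_right  : in  signed(DATA_WIDTH-1 downto 0);
    out_valid : out std_logic;                         -- pulses when the processed pair is ready
    out_left  : out signed(DATA_WIDTH-1 downto 0);
    out_right : out signed(DATA_WIDTH-1 downto 0)
  );
end entity;

architecture rtl of effect_stage_sketch is
begin
  process(clk)
  begin
    if rising_edge(clk) then
      out_valid <= '0';
      if in_valid = '1' then
        -- placeholder "effect": halve the volume (arithmetic shift right by 1).
        -- A real filter, delay or distortion block goes here instead.
        out_left  <= shift_right(in_left, 1);
        out_right <= shift_right(in_right, 1);
        out_valid <= '1';
      end if;
    end if;
  end process;
end architecture;

The I2S transmitter then simply reloads its output shift register whenever out_valid pulses.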
Hello again, thanks for the detailed explanations. I will look deeper into I2S as you've suggested. Speaking about what you've mentioned regarding the need for serial-to-parallel conversion, why is it necessary? I get that I2S is a serial communication protocol, but why can't I operate on the individual bits? By making them parallel, what do I gain? Thanks!
Daniel wrote:
> I've accomplished the I2S part as mentioned in my question.

So you fully understand the I2S protocol, can decode it and loop it back.

Daniel wrote:
> The thing is that currently all incoming data is immediately passed
> back to the DAC, so you hear exactly what you play.

Ah ... so you must have a sample-based audio data stream.

Daniel wrote:
> I will look deeper into I2S as you've suggested.

To do what? I2S is only a simple serial format and does not state anything about its content. And you say you have looped it back already. Or should we assume you just have an example design that is already a loopback and you just want to cut the path to drop in your own work? I cannot really see the issue with that, since it is just a simple uncoded, unscrambled and uncompressed stream with each clock. It is like serial RS232, just with two channels.

Daniel wrote:
> I would like to accomplish any meaningful effect on the
> data.

You will have to learn how to manipulate samples. You can basically do the same things as in the analog domain:
- multiply by 0.7
- divide by 2
- apply a band limiter
- apply a compressor

Daniel wrote:
> the AXI stream was established between the I2S receiver and the I2S
> transmitter, why not just output the receiver's data into the FIR and
> the FIR's output to the transmitter?

I guess the filter had an AXI interface. Therefore you have to use AXI.

Daniel wrote:
> regarding the need for serial-to-parallel
> conversion, why is it necessary?

Because I2S is a serial protocol: the bits arrive one at a time, but the arithmetic works on the complete sample value.

> why can't I operate
> on the individual bits?

Because it is a binary format where every bit has a different weight. (Hard to believe you do not know about binary formats.)

> By making them parallel, what do I gain?

A digital value in binary format. Did you ever work with RS232? It is exactly the same, only with more bits.
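To show what "multiply by 0.7" looks like on a parallel sample: fractional constants are normally implemented as an integer multiplication followed by a shift. A rough sketch with made-up names, a Q1.15 coefficient, and no rounding or saturation:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Fixed-point gain sketch: multiply a sample by roughly 0.7.
-- The coefficient is 0.7 * 2**15 = 22938 in Q1.15; the product is
-- shifted right by 15 to remove the scaling again.
entity gain_sketch is
  generic (
    DATA_WIDTH : integer := 24
  );
  port (
    clk          : in  std_logic;
    sample_valid : in  std_logic;
    sample_in    : in  signed(DATA_WIDTH-1 downto 0);
    sample_out   : out signed(DATA_WIDTH-1 downto 0)
  );
end entity;

architecture rtl of gain_sketch is
  constant COEFF : signed(15 downto 0) := to_signed(22938, 16);  -- ~0.7 in Q1.15
begin
  process(clk)
    variable product : signed(DATA_WIDTH+15 downto 0);
  begin
    if rising_edge(clk) then
      if sample_valid = '1' then
        product    := sample_in * COEFF;                            -- 24x16 -> 40 bit
        sample_out <= resize(shift_right(product, 15), DATA_WIDTH); -- back to 24 bit
      end if;
    end if;
  end process;
end architecture;

Dividing by 2 is just an arithmetic shift right by one; the general pattern is that you never divide, you multiply by a scaled integer coefficient and shift the scaling back out.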
Daniel wrote:
> Hello again, thanks for the detailed explanations.
> I will look deeper into I2S as you've suggested.
>
> Speaking about what you've mentioned regarding the need for
> serial-to-parallel conversion, why is it necessary?
> I get that I2S is a serial communication protocol, but why can't I
> operate on the individual bits?
>
> By making them parallel, what do I gain?
>
> Thanks!

Like 'confused guest' said, operating on single bits one by one means you constantly have to gather new data. Take multiplication: if you do a multiplication by hand on a sheet of paper, you could start the calculation bit by bit, but you would still need a register to save all the intermediate information, and shifting the result back out is additional overhead. Having a parallel sample reduces this to a single operation from the point of view of the hardware description. Also, I2S sends the MSB first, so you would have to buffer all the bits anyway before you could start multiplying; it would therefore be even more complicated than converting once from serial to parallel. Hardware is parallel in nature, and building the whole processing pipeline around a 1-bit serial stream is harder and results in a more complicated design.
And as another comment: How would you save your bits in the BRAM as described in your initial post? You would want to utilize all bits of one memory word.