Happy Easter, everyone!
I hope you are doing great this holiday season. I am Rino Jonathan, and I am really eager to work on the project titled 'Property editor for creating transfer-learning ML models'.
In this project, we aim to integrate the external websites used for Personal Image Classification and Personal Audio Classification into the core MIT App Inventor web app using property editors. The main goal is to streamline the workflow involved in training these models and make them easier to use.
If you have any suggestions or feedback regarding my proposal or the project in general, feel free to share your thoughts. Your feedback means a lot to me.
Hi, I have a question; it would be great if anyone could clear it up.
The Personal Audio Classifier website uses an API to convert audio to a spectrogram.
Is there any reason we preferred a separate API instead of using a library like wavesurfer.js to generate the spectrogram on the client side?
And why did we decide on getting just a 1-second-long recording from the user instead of a longer one?
P.S.: here is a link to the site I am discussing
IIRC the issue at the time was the precision of the computations done in the browser versus on a dedicated server. I would love it if we could come up with a solution that runs entirely in the browser and eliminates the external dependency.
Yeah, that would be fun to work on too,
and we might also be able to get a longer audio recording from the user.
We can take a look at wavesurfer.js's or p5.js's spectrogram libraries; we will have to compare which one would be better to use.
https://wavesurfer.xyz/examples/?spectrogram.js
Above is a little sample from wavesurfer's site.
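To get a feel for whether the audio-to-spectrogram step could run entirely in the browser (instead of going through the external API), here is a minimal sketch of a short-time Fourier transform in plain JavaScript. This is my own illustration, not the Personal Audio Classifier's actual pipeline, and it uses a naive DFT for clarity; a real client-side version would use an FFT library, wavesurfer's spectrogram plugin, or the Web Audio API's AnalyserNode.

```javascript
// Naive DFT of one real-valued frame; returns magnitudes for the
// first half of the frequency bins (the rest mirror them).
function dftMagnitudes(frame) {
  const N = frame.length;
  const mags = new Float64Array(N / 2);
  for (let k = 0; k < N / 2; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const angle = (-2 * Math.PI * k * n) / N;
      re += frame[n] * Math.cos(angle);
      im += frame[n] * Math.sin(angle);
    }
    mags[k] = Math.sqrt(re * re + im * im);
  }
  return mags;
}

// Slice the signal into overlapping Hann-windowed frames and
// transform each one: time steps x frequency bins = a spectrogram.
function spectrogram(samples, frameSize = 256, hop = 128) {
  const frames = [];
  for (let start = 0; start + frameSize <= samples.length; start += hop) {
    const frame = new Float64Array(frameSize);
    for (let i = 0; i < frameSize; i++) {
      const hann = 0.5 * (1 - Math.cos((2 * Math.PI * i) / (frameSize - 1)));
      frame[i] = samples[start + i] * hann;
    }
    frames.push(dftMagnitudes(frame));
  }
  return frames;
}

// Demo: 1 second of a 440 Hz sine at 16 kHz. The peak should land near
// bin 440 / (16000 / 256) ~ 7 in every frame.
const sampleRate = 16000;
const samples = Float64Array.from({ length: sampleRate }, (_, i) =>
  Math.sin((2 * Math.PI * 440 * i) / sampleRate)
);
const spec = spectrogram(samples);
const firstFrame = spec[0];
let peakBin = 0;
for (let k = 1; k < firstFrame.length; k++) {
  if (firstFrame[k] > firstFrame[peakBin]) peakBin = k;
}
console.log(`${spec.length} frames, peak frequency bin ${peakBin}`);
```

Even this unoptimized version handles a 1-second clip quickly, which suggests longer recordings are feasible client-side too; the precision question Evan mentioned would come down to comparing these float64 magnitudes against the server's output on the same clips.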