11/12/2022

SpectraLayers Pro tutorial

My particular interest is in isolating the guitar parts in instrumental recordings (e.g. guitar with orchestra or with a band), for the purpose of picking up subtle clues that help me determine the exact notes being played as well as the position on the fingerboard. Generally, the guitar recordings I transcribe are clean, with few effects other than a small amount of delay or reverb. They may be either acoustic (classical) or electric (typically without distortion).

I'm wondering whether it is possible to create a unique layer or track for the guitar, either by using the AI features in Pro 7 or by manually selecting the guitar part in the spectrum with the available selection tools. It seems like it would be easy enough to select the fundamental frequency of the guitar part in a song using the manual selection tools, but is there a way to simultaneously select the associated harmonics, which may overlap with other instruments in the mix? Or is there a way to reverse the unmix process (for example, to teach Pro 7 to recognize the components of a specific instrument)?

To be clear, I have not yet tested or purchased the product. I currently use Audacity to slow the tempo of a song without changing the pitch, and I have also used its spectral display to examine the progression of notes in runs, slides, and so on. But I'm wondering whether the Pro 7 functionality might take me to a new level. I agree that the best way to find out about a product is to try it, and I hope to do that in the very near future. The reason for asking my question on this forum is that I am trying to learn, not how well SpectraLayers Pro 7 does what it is advertised to do, or whether it does that better than similar products in the marketplace, but whether it can do something it was not designed to do and is not advertised as being able to do. Usually, those are the sort of things that only the developers and super-users have insight into.

From the tutorials I've watched, Pro 7 provides a simple-to-use, automated capability for unmixing audio into five layers: vocals, piano, drums, bass, and "other" (with "other" being everything that is not vocals, piano, drums, or bass). And from the YouTube videos I've seen, it appears to do that quite well. I can see that there is no automated way to separate a single instrument beyond those categories, but I am wondering if there is a manual way to do it. My interest would be in separating the guitar out of the "other" category. Say, for example, I have a CD of a guitarist playing with an orchestra, with no vocals. The program would automatically produce four layers: piano, drums, bass, and other. The "other" layer would include the rest of the orchestra as well as the guitar. My initial impressions are that it probably can't be done, at least not without more access to the AI engine. But I wouldn't want to come to that conclusion without asking the question.

A reply from the thread: Good that you acknowledge the outsider's-view problem; then again, novel ideas often come from out-of-the-box views. The ideas shared here are ones that we long-time users might have thought of at some point to improve this tool, and they do look interesting. Concentrating on distinctive guitar timbres as a start (like the metal-string example) might produce better unmixing outcomes. One idea is a process that takes a user-selected sample, analyzes it, automatically selects the best unmixing algorithm, and then applies specific values at critical articulation points of that algorithm for best results. Others have asked for scripts (user-selected) that would permit sequenced passes of certain algorithms for best results.
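The question about selecting a fundamental together with its harmonics can be illustrated outside SpectraLayers (whose internal tools are not scriptable here, as far as the thread establishes). This is a minimal, hypothetical numpy sketch: given an estimated fundamental frequency, build a boolean mask over FFT bin frequencies that selects bins near integer multiples of that fundamental; the function name, tolerance, and harmonic count are illustrative assumptions, not a SpectraLayers API.

```python
import numpy as np

def harmonic_mask(freqs, f0, n_harmonics=10, tolerance_hz=20.0):
    """Boolean mask over FFT bin frequencies selecting f0 and its harmonics.

    freqs        -- array of bin center frequencies (Hz)
    f0           -- estimated fundamental frequency (Hz)
    n_harmonics  -- how many integer multiples of f0 to include
    tolerance_hz -- half-width of the band kept around each harmonic
    """
    mask = np.zeros(len(freqs), dtype=bool)
    for k in range(1, n_harmonics + 1):
        mask |= np.abs(freqs - k * f0) <= tolerance_hz
    return mask

# Example: bin frequencies for a 4096-point FFT at 44.1 kHz
freqs = np.fft.rfftfreq(4096, d=1 / 44100)
mask = harmonic_mask(freqs, f0=196.0)  # 196 Hz is G3, a common guitar note

# Applying the mask to one STFT frame would keep only energy near the
# harmonics (overlapping instruments in those same bins are kept too,
# which is exactly the difficulty the question raises):
# masked_frame = stft_frame * mask
```

In a real spectral editor the selection would be applied per time frame and the tolerance widened for higher harmonics, since vibrato and slight inharmonicity spread the upper partials.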
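The reply's idea of analyzing a user-selected sample and automatically choosing an unmixing algorithm can be sketched as a simple dispatcher. Everything here is hypothetical: the spectral-centroid heuristic, the thresholds, and the algorithm names are placeholders, not real SpectraLayers modules.

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Magnitude-weighted mean frequency of the signal's spectrum (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

def choose_algorithm(signal, sr):
    """Pick an unmixing strategy from a crude brightness measure."""
    centroid = spectral_centroid(signal, sr)
    if centroid < 250:
        return "low-register separator"    # bass-like material
    elif centroid < 2000:
        return "harmonic separator"        # e.g. clean guitar
    return "percussive/bright separator"   # cymbals, noise-like material

# One second of a clean-guitar-like tone: G3 plus its second harmonic
sr = 44100
t = np.arange(sr) / sr
guitar_like = np.sin(2 * np.pi * 196 * t) + 0.5 * np.sin(2 * np.pi * 392 * t)
print(choose_algorithm(guitar_like, sr))  # → harmonic separator
```

A production version would use several features (harmonicity, attack shape, bandwidth) rather than one centroid, but the dispatch structure would be the same.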
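The "scripts permitting sequenced passes" request amounts to a processing pipeline: a user-chosen ordered list of passes applied to one signal. The sketch below shows only the pipeline shape; the two toy passes (a crude gate and a peak normalizer) are illustrative stand-ins, not SpectraLayers features.

```python
import numpy as np

def denoise(x):
    # Toy pass: zero out very small sample values (a crude noise gate).
    return np.where(np.abs(x) < 0.01, 0.0, x)

def normalize(x):
    # Toy pass: scale so the absolute peak is 1.0.
    peak = np.max(np.abs(x))
    return x / peak if peak > 0 else x

def run_passes(signal, passes):
    """Apply each pass in sequence, feeding each output to the next."""
    for p in passes:
        signal = p(signal)
    return signal

x = np.array([0.005, 0.5, -0.25, 0.002])
y = run_passes(x, [denoise, normalize])
# y is the gated signal rescaled so its peak is 1.0
```

The point of the request in the thread is that the *order* matters (gating before normalizing differs from the reverse), which is exactly what a user-selected sequence would control.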