@pavouk100 wrote:
@kyran wrote:
The best way to record trumpets or other brass is using a dynamic mic (e.g. an SM57) close to the bell of the instrument.
😯 – I’m doing exactly the opposite for trumpets and trombones: a large-diaphragm condenser, placed about 1 m in front of the instrument. There is usually no need for additional reverb mixed in later, and the sound is pretty and natural; however, a good-sounding room is a must in this case 😉
There is no general rule to all of this 🙂
I mostly don’t have good-sounding rooms available to me when recording, so I try to get everything as dry as possible. I use dynamics as much as possible, because they pick up less of the room.
But now you’ve got me interested, so the next time I’m going to try your method as well 🙂
(Again: I’m no specialist, I sort of rolled into recording bands, I’m still learning, and as the audio track I posted proves, there’s lots of room for improvement 😉 )
I also suggest you use as little processing as possible on the recordings: no compressors, gates or whatever. You can (and should) do all of this in Podium.
I spend lots of time cutting silence from takes (especially on vocals, because you’re most likely going to have to compress the life out of them, and you don’t want the room spill audible in the parts where the singer isn’t singing).
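To show what that silence-cutting amounts to, here is a minimal sketch of an automatic version: mute any stretch of a take that stays below a loudness threshold for long enough. The function name and thresholds are illustrative assumptions, not a Podium feature.

```python
# Hypothetical sketch: zero out long quiet stretches of a take, the
# same job as cutting silence by hand, so compression later doesn't
# pump up the room spill between vocal phrases.

def mute_silence(samples, threshold=0.02, min_gap=1000):
    """Return a copy of `samples` with quiet stretches zeroed.

    A stretch is muted only if every sample in it is below `threshold`
    and it is at least `min_gap` samples long, so short pauses between
    words are left untouched.
    """
    out = list(samples)
    start = None  # index where the current quiet stretch began
    for i, s in enumerate(samples):
        quiet = abs(s) < threshold
        if quiet and start is None:
            start = i
        elif not quiet and start is not None:
            if i - start >= min_gap:
                for j in range(start, i):
                    out[j] = 0.0
            start = None
    # handle a quiet stretch running to the end of the take
    if start is not None and len(samples) - start >= min_gap:
        for j in range(start, len(samples)):
            out[j] = 0.0
    return out
```

In practice you’d also fade the edges of each cut instead of zeroing abruptly, but the threshold-and-minimum-length logic is the core of it.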
The best way to record trumpets or other brass is using a dynamic mic (sm57) close to the end of the instrument.
Flute, acoustic guitar and other “quiet” instruments are typically done with condensers.
Also, warn the band that the first time you do this the results will be crap. Getting a good recording is only half the work. Mixing down a recorded track is totally different from working with electronic material.
On the other hand, you’ll only learn this by doing it. After you’ve done your first recording, reread this thread and a lot of it will make sense, because then you’ll have experienced the problems we’re giving solutions to 🙂
Using asio4all you can group soundcards into one “meta device”.
Now, unless you have a very good recording studio, I’d advise you to work with overdubs.
I’m not a specialist (I normally produce electronic music just like you), but I’ve been asked the same thing a few times as well. What I do is:
I make “live” recordings of all the tracks, where the band plays together in the same room, with one or two mics. The idea is that these tracks will serve as a “click” track for the overdubs.
I then have each instrument tracked separately to these click tracks.
Finally I mix the track from the overdubs and trash the click track.
The good points are that you can get by with fewer inputs and mics, you don’t have to worry about mic spill (unless you know what you’re doing, you will otherwise end up with drums on every track), and you can take your time getting the mic placement and signal levels right.
The downside is that the recording sessions take lots of time this way (during which 90% of the band has nothing to do), and that you don’t really capture a live feel in the performances (depending on what the band wants, this may not be an issue).
Also: have the musicians re-record every passage you feel is sloppy, and try to get two full performances of the guitar and vocals (double-tracking).
Here you can listen to one of the tracks I recorded and mixed this way:
@Lion wrote:
@Zynewave wrote:
If you mean a way to set up quantization or swing to be applied only during playback, then that is not something I have planned.
That’s exactly what I mean. The ReGroove Mixer in Reason 4 does this: it applies the groove (basically MIDI, in a proprietary extension) to the track without actually altering the MIDI notes (while the destructive option is still available).
I have no idea how Reason tackles this, but I think that applying it only during playback is quite limiting, because you can only delay notes, not shift them earlier.
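The non-destructive idea itself is simple to sketch. The following is an assumption about how such a groove could work, not how Reason actually implements it: per-step offsets are applied when note times are handed to the player, while the stored MIDI data is never touched. Shifting notes *earlier* works here only because the whole event list is transformed up front; a strictly real-time player would need lookahead to do the same, which is the limitation mentioned above.

```python
# Hypothetical sketch of a non-destructive groove, in the spirit of
# Reason's ReGroove Mixer: offsets are applied on the way to playback,
# the stored note times stay untouched.

GROOVE = [0, -10, +20, -5]  # per-sixteenth offsets in ticks (negative = earlier)
TICKS_PER_16TH = 120

def playback_times(notes):
    """Map stored note times to grooved playback times, without mutating `notes`."""
    played = []
    for t in notes:
        step = (t // TICKS_PER_16TH) % len(GROOVE)
        played.append(t + GROOVE[step])
    return played

notes = [0, 120, 240, 360]      # a straight sixteenth-note row
print(playback_times(notes))    # → [0, 110, 260, 355]
print(notes)                    # → [0, 120, 240, 360]  (unchanged)
```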
Sounds great 🙂
@LiquidProj3ct wrote:
@thcilnnahoj wrote:
2. Custom grids, as in FL Studio. You can load any MIDI file into the quantizer tool and it will work as a grid (for velocities too). Take a look at its help file: http://flstudio.image-line.com/help/html/pianoroll_qnt.htm
I like that FL feature a lot. Interface-wise, you could replace the first editor box (where you select the grid snap) with a selector for the MIDI file containing the snap grid.
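The core of that FL Studio feature is just snapping to an arbitrary set of times instead of a fixed grid. A minimal sketch, assuming the template MIDI file has already been read and its note-on times collected into a list (the file parsing itself is left out):

```python
# Hypothetical sketch of the "MIDI file as grid" idea: the note-on
# times of a template file become the snap targets, and each note in
# the edited clip snaps to the nearest one.

def snap_to_grid(notes, grid):
    """Snap each note time to the nearest time in `grid`."""
    return [min(grid, key=lambda g: abs(g - t)) for t in notes]

grid = [0, 110, 260, 355]     # times taken from a groove template
notes = [5, 118, 250, 351]    # slightly sloppy performance
print(snap_to_grid(notes, grid))   # → [0, 110, 260, 355]
```

Copying the template’s velocities over would work the same way: find the nearest grid note and take its velocity instead of (or blended with) the original.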
Making a state-of-the-art time stretch is probably a full-time job; why else would all those big companies license zplane?
Personally, I use timestretch either as a special effect (where graininess is actually a plus) or to clean up slightly off-time recordings (à la Live’s warping or Logic’s Flex Time).
When “cleaning” recordings I want minimal artifacts, but these are mostly just minor adjustments, so even a less-than-stellar algorithm should give OK results.
Now, if you want to stretch loops by more than 25% with good quality, you need a good algorithm. I currently do this in a wave editor that uses the zplane algorithms. If this becomes possible in Podium, all the better, but I’d rather have something than nothing, and I think a lot of use cases can be covered with a simple algorithm.
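For a sense of what the “simple algorithm” end of the spectrum looks like, here is a minimal sketch of plain overlap-add granular stretching: windowed grains are read from the input at one hop size and written out at another. With no phase or transient handling, this is exactly the kind of algorithm that gets grainy at large ratios but is fine for small timing corrections. Everything here (names, grain and hop sizes) is illustrative.

```python
import math

def stretch(samples, ratio, grain=256, hop=64):
    """Stretch `samples` by `ratio` (>1 = longer) with windowed overlap-add."""
    out_hop = max(1, int(round(hop * ratio)))
    n = max(1, (len(samples) - grain) // hop + 1)
    out = [0.0] * ((n - 1) * out_hop + grain)
    norm = [0.0] * len(out)  # sum of window weights, for normalization
    # Hann window so overlapping grains cross-fade smoothly
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / grain) for i in range(grain)]
    for g in range(n):
        src, dst = g * hop, g * out_hop
        for i in range(grain):
            out[dst + i] += samples[src + i] * win[i]
            norm[dst + i] += win[i]
    return [o / m if m > 1e-9 else 0.0 for o, m in zip(out, norm)]
```

At ratio 1.05 the grains still overlap heavily and the seams are inaudible on most material; push it to 2.0 and you hear the graininess immediately, which is why the zplane-class algorithms exist.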
I’m good with creating new ones
I think the Dirac2 one is the easiest, licensing-wise.
Their free version allows one channel operation for both commercial and non-commercial use.
(They say you can use it on multichannel audio by running their object on each channel separately. The quality of the pro and “more pro” versions will most likely be better, because they process all channels at once.)
I don’t think the algorithm needs to be state-of-the-art, but some timestretching would be nice, especially if it comes with an Ableton Live-style warp-marker interface. That lets you correct minor mistakes in recordings, and those corrections shouldn’t be so big that you start hearing artifacts.
The frightening thing is that this is annual!
The one-off license will probably be even more prohibitively priced.
Some other alternatives:
http://www.dspdimension.com/technology-licensing/dirac2/
These guys have a free library (depending on how Podium works, this might even be enough). The paid versions are one-off fees.
http://www.surina.net/soundtouch/ (open source, used in Audacity)
http://breakfastquay.com/rubberband/ (also open source)
I wouldn’t mind paying something extra to get timestretch support.
It all depends on how much zplane asks for it I guess.
Clicking in the empty rack editor could show the list to add a parameter.
I’d also like a button to toggle an automation track for a parameter in the rack editor (or right-click -> add automation track).
Apart from that, no real suggestions: it’s a brilliant addition 🙂
Maybe it’s time to redo some of these videos. I’m sure some of us could help out.