annotation
Mirco Ravanelli authored and GitHub committed
* Refactor the HuggingFace (HF) interface and adapt the recipes accordingly; rename the HF lobe files, rename the classes, and fix the docstrings and argument documentation (see the import sketch after this group).
* Recipe and housekeeping updates: CommonVoice, Switchboard, TIMIT, AISHELL-1, and LibriSpeech (seq2seq, Transformer, SSL) recipes, README updates, fix a link in a test file, remove an unused space token, update torchaudio, remove the deprecated language-model path, fix the merge and the vocabulary, remove an unused hyperparameter, and make `blank_skip_threshold` consistent.
* CTC prefix beam search: experiment with timestamps/text frames in the prefix beam searcher, then revert the experimental commits; update `ctc.py`; rename `timesteps` to `text_frames`.
* Dynamic batching: fix it, Revert "Fix dynamic batching" (#2173), then Fix dynamic batching (#2174); update doctest skips, `interfaces.py`, and `text_to_sequence.py`; fix wav2vec2 usage; add unit tests for sorting; update the authors list; fix the Conformer-large recipe and indentation.
* Small fixes in averaging checkpoints (#2181): add a checkpoint-averaging unit test, avoid hard-coding the number of averages, convert a print into a logger call, and fix the Transducer recipe.
* Update unstable branch with new commits (#2196): HyperConformer/Branchformer fixes, adapt the GPT recipe, a small follow-up fix on OpenRIR, fix an issue in greedy search, and docstring/doctest cleanups.
* Fix issues unstable (#2216).
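The HF-interface refactor above moved the HuggingFace lobes into a dedicated package and shortened the class names (e.g. `wav2vec` to `wav2vec2`). The sketch below shows what the before/after imports might look like; the module paths, class names, and constructor arguments are assumptions about the refactored layout, not a verbatim description of the released API.

```python
# Hypothetical before/after import sketch for the HF-interface refactor.
# The module paths, class names, and constructor arguments below are
# assumptions about the refactored layout, not verbatim API.
import torch

try:
    # Assumed post-refactor layout: one sub-module per HF model family.
    from speechbrain.lobes.models.huggingface_transformers.wav2vec2 import Wav2Vec2
except ImportError:
    # Pre-refactor layout used by older SpeechBrain releases.
    from speechbrain.lobes.models.huggingface_wav2vec import (
        HuggingFaceWav2Vec2 as Wav2Vec2,
    )

# Wrap a pretrained encoder and run it on a batch of waveforms.
encoder = Wav2Vec2(
    source="facebook/wav2vec2-base-960h",    # HF hub identifier
    save_path="pretrained_models/wav2vec2",  # local cache for the weights
    freeze=True,
)
wav = torch.randn(1, 16000)  # one second of fake 16 kHz audio
feats = encoder(wav)         # (batch, frames, hidden_size) features
```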
* Fix missing file / import in huggingface_transformers (#2224): fix the package init/imports, add a partial import, rename `wav2vec` to `wav2vec2`, and fix the CI.
* Text based HF (#2214): add mBART and NLLB interfaces, a tri-stage scheduler, mBART beam search, and IWSLT recipes; add docstrings for `S2STransformerBeamSearcher`; handle the protobuf requirement; update the recipe structure and READMEs. Co-authored-by: Mirco Ravanelli, Adel Moumen.
* Neural LM Rescoring (#2187): a base rescorer interface with RNN and Transformer rescorers, docstring examples, unit and full-inference tests, updated YAMLs, and a model link (1.57 WER reported). Co-authored-by: Mirco Ravanelli.
* Add wrappers for Encodec and Vocos vocoders (#2231): HF-based wrappers with examples, mask fixes, configurable `save_path` and bandwidth, support for embedding vectors, and automatic reshaping; Vocos stays an optional dependency checked in `conftest.py` (see the Encodec sketch after this group). Co-authored-by: flexthink, Mirco Ravanelli.
* Semantically-Aligned Multimodal Utterance-level (SAMU) pre-training (#2223): attention pooling, LaBSE, IWSLT recipes with SAMU, recipe-test fixes, and Dropbox links. Co-authored-by: Mirco Ravanelli.
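The Encodec/Vocos wrappers above build on the codec models exposed by the `transformers` library. As a rough illustration of the encode/decode round trip the wrapper provides (token extraction and waveform reconstruction), here is a sketch using the raw `transformers` Encodec API directly; the SpeechBrain wrapper's own module path and arguments are not shown because they may differ.

```python
# Sketch of the HF Encodec round trip that the SpeechBrain wrapper builds on.
# This uses the public `transformers` API directly; the wrapper itself adds
# masking, save_path handling, bandwidth control, and automatic reshaping.
import torch
from transformers import AutoProcessor, EncodecModel

model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")

wav = torch.randn(24000).numpy()  # one second of fake mono audio at 24 kHz
inputs = processor(
    raw_audio=wav, sampling_rate=processor.sampling_rate, return_tensors="pt"
)

# Encode the waveform into discrete codebook tokens.
encoded = model.encode(inputs["input_values"], inputs["padding_mask"])

# Decode the tokens back into a waveform.
decoded = model.decode(
    encoded.audio_codes, encoded.audio_scales, inputs["padding_mask"]
)[0]
```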
* fix norm (#2237).
* Discrete SSL (#2233): k-means clustering recipes on LibriSpeech for several SSL models, discrete HuBERT, WavLM, and wav2vec2 interfaces that load the k-means models from HF, device and memory fixes, a README, and recipe tests.
* Fixes for Encodec (#2240): decouple token extraction, fix CPU/GPU issues, and add renormalization. Co-authored-by: flexthink, Mirco Ravanelli.
* Refactoring of the 'fit_batch' function (#2010): a single precision option (fp32/fp16/bf16) with the autocast context and `GradScaler` handled inside the Brain class, `run_opt` moved outside the Brain, gradient-accumulation and `skip_grad_nans` handling, CPU-training support, checkpointing of the scaler, and updates to essentially all recipes and their YAMLs (see the sketch after this group). Co-authored-by: Mirco Ravanelli.
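The `fit_batch` refactor above replaces the old `auto_mix_prec` flag with a precision option and manages the autocast context and `GradScaler` internally. Conceptually, a training step under that flag boils down to the plain-PyTorch sketch below; this is an illustration of the mechanism, not SpeechBrain's actual `fit_batch` code, and the helper names are made up.

```python
# Plain-PyTorch illustration of what a precision flag (fp32 / fp16 / bf16)
# selects inside a training step. Not SpeechBrain's actual fit_batch code.
import torch

def make_scaler(precision: str) -> torch.cuda.amp.GradScaler:
    # Loss scaling is only required for fp16; bf16 and fp32 run unscaled.
    return torch.cuda.amp.GradScaler(enabled=(precision == "fp16"))

def train_step(model, batch, optimizer, loss_fn, scaler,
               precision="fp32", device_type="cuda"):
    dtype = {"fp16": torch.float16, "bf16": torch.bfloat16}.get(precision)
    inputs, targets = batch
    optimizer.zero_grad()
    # Run the forward pass and the loss under autocast when a reduced
    # precision is requested; fp32 leaves autocast disabled.
    with torch.autocast(device_type=device_type, dtype=dtype,
                        enabled=dtype is not None):
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()  # scaling is a no-op when the scaler is disabled
    scaler.step(optimizer)
    scaler.update()
    return loss.detach()
```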
* Refactor Augmentation (#2206): split time- and frequency-domain augmentations, introduce a refactored `Augmenter` (with activation probability and an enable flag for hyperparameter tuning), refactor SpecAugment, add CutCat, swap, random selection, random shift, sum-batch noise, drop-bit-resolution, and codec augmentation, support variable-length augmentations (e.g. speed change), make AddReverb more similar to AddNoise, add workers to speed up AddNoise and AddReverb, convert all recipes and templates (including the EnvCorrupt-based ones), remove the hard-to-maintain TIMIT knowledge-distillation recipes, and extend the unit, doc, and recipe tests.
* Refactor Inference (files and folders) (#2252).
* Beam-search fixes: starting position and prefix length, the blocked CTC path, and -inf scores (#2253); log-probabilities, the space token, warnings, and simplified parameters and YAMLs (#2263); vocab/str handling (#2265); the CTC blank index, including Whisper (#2266). A minimal CTC-decoding reminder follows this group.
* Cv unstable merge (#2254): French preprocessing in `common_voice_prepare.py`, new CommonVoice languages with CTC, seq2seq, Transducer, and Transformer recipes, updated augmentation, CTC beam search for ar, es, pt, and zh-CN, and a Whisper HF-interface fix (return a str instead of a list). Co-authored-by: Mirco Ravanelli.
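Several of the fixes above revolve around CTC decoding details (the blank index, the space token, and log-probability handling). As a reminder of the convention those fixes guard, a minimal greedy CTC decoder is sketched below; it is a generic illustration, not the beam-search code touched by these PRs.

```python
# Generic CTC greedy decoding, written out to make the blank-index
# convention explicit. Illustration only.
import torch

def ctc_greedy_decode(log_probs: torch.Tensor, blank_index: int) -> list:
    """Collapse repeated labels, then drop blanks.

    These are the two steps every CTC decoder must get right, and where a
    wrong blank_index silently corrupts the output.
    log_probs: (time, vocab) per-frame log-probabilities.
    """
    best = log_probs.argmax(dim=-1).tolist()  # best label per frame
    decoded, previous = [], None
    for token in best:
        if token != previous and token != blank_index:
            decoded.append(token)
        previous = token
    return decoded

# Toy usage: 5 frames over a 4-symbol vocabulary with the blank at index 0.
frames = torch.log_softmax(torch.randn(5, 4), dim=-1)
print(ctc_greedy_decode(frames, blank_index=0))
```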
* Add warnings and fix numba (#2271): upper-bound torch/torchaudio, remove an optional dependency, and restore the automix/bf16 flags and test requirements.
* Fix Bug: CommonVoice Transformer bug loading the correct optimizer (#2278): load Adam vs. SGD correctly, add `data_root` to the `common_voice_prepare.py` path, add the epoch counter to the pretrainer for the fr and it recipes, and improve the log message.
* Update WeightedSSLModel (#2272) and `requirements.txt`.
* Sg/dac (#2246): introduce the Descript Audio Codec (DAC), with documentation and smaller doctests to avoid CI memory issues. Co-authored-by: Shubham Gupta, Mirco Ravanelli.
* Add quantization recipes for IEMOCAP, CommonVoice, LibriSpeech, and LJSpeech (#2255): update the discrete SSL models, move `iemocap_prepare` to the main folder with a test, and fix recipe tests and typos. Co-authored-by: Mirco Ravanelli.
* Change the embedding type from long to float to avoid all-zero embeddings (#2292).
* Update CVSS (#2285), including `train_fr-en.yaml`.
* Update HF interface (#2293).
* RNN Transducer Numba loss: add FP16 and BF16 support (code from Samsung AI Cambridge) (#2296), including a fix for the fp16 Transducer recipe and a note on half precision.
* Make lobes use fp32 when AMP is active (#2295): add `utils.autocast` with a `fwd_default_precision` function and decorate the lobes so that only floating-point inputs are affected (a generic sketch of the idea follows this group). Co-authored-by: asu, Titouan Parcollet (SRUK), Mirco Ravanelli.
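PR #2295 above makes numerically sensitive lobes compute in float32 even when autocast/AMP is active. A generic sketch of that idea follows: disable autocast inside the wrapped forward and upcast floating-point tensor inputs. This illustrates the technique only; SpeechBrain's actual helper name, location, and behaviour may differ.

```python
# Generic sketch of forcing a module to compute in float32 under AMP.
# Illustration of the idea only, not SpeechBrain's implementation.
import functools
import torch

def force_fp32_forward(forward):
    """Wrap a forward so it runs in float32 even when autocast is active.

    Floating-point tensors are upcast; integer tensors (e.g. token ids) and
    non-tensor arguments are passed through untouched.
    """
    @functools.wraps(forward)
    def wrapper(*args, **kwargs):
        def to_fp32(x):
            return x.float() if torch.is_tensor(x) and x.is_floating_point() else x
        args = tuple(to_fp32(a) for a in args)
        kwargs = {key: to_fp32(val) for key, val in kwargs.items()}
        # Turn autocast off for the wrapped region so ops stay in fp32.
        with torch.autocast(device_type="cuda", enabled=False), \
             torch.autocast(device_type="cpu", enabled=False):
            return forward(*args, **kwargs)
    return wrapper

# Example: keep a numerically sensitive block in fp32 under mixed precision.
lobe = torch.nn.Linear(80, 80)            # stand-in for a sensitive lobe
lobe.forward = force_fp32_forward(lobe.forward)
out = lobe(torch.randn(4, 80, dtype=torch.float16))  # computed in fp32
```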
* Fix recipe tests for TransformerASR (#2282) and fix the position embedding (#2283): use SpeechBrain's internal positional encoding, generate masks from sequence lengths (also calling the mask function from core for Tacotron), fix the device, reduce the training epochs, and update links. Co-authored-by: Mirco Ravanelli.
* Gradscaler flags (#2281): add flags for the GradScaler and a finite-loss check with a better message. Co-authored-by: Mirco Ravanelli.
* Add Llama 2 recipes (#2299): new recipes and interface updates, license information in the README, extra-requirement info in `llama.py`, inter-epoch checkpointing, and cleanups. Co-authored-by: Mirco Ravanelli.
* Small fixes: make all recipes CPU-compliant and the recipe tests pass on both CPU and GPU, fix broken links, remove links to private HF repos, fix the LibriTTS and LJSpeech recipe tests, and update `CommonVoice.csv`.
* Streamable Conformer-Transducer ASR model for LibriSpeech (#2140): introduce Dynamic Chunk Training with dynamic chunk convolutions and chunked attention, a dynamic-chunk-training configuration dataclass in `utils` with its own test, streaming support in the Conformer and TransformerASR (`encode_streaming`, a streaming greedy path in `TransducerBeamSearcher`), a random dynamic-chunk sampler example, fixes for causal models, re-enabled checkpoint averaging (10 checkpoints), new README results, and a streaming integration test (a chunk-mask sketch follows this group).
* Add KenLM n-gram training recipe (#2304): train an n-gram LM with KenLM, support CTC beam search with KenLM in the EncoderASR interface, use the binary sources in bashrc, and clarify the READMEs. Co-authored-by: Mirco Ravanelli.
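The streamable Conformer-Transducer work above relies on Dynamic Chunk Training: during training, self-attention is restricted to fixed-size chunks plus a limited number of left-context chunks, so the same model can later run in a streaming fashion. A small, self-contained sketch of such a chunk mask is shown below; it illustrates the masking idea only and is not the mask code used in SpeechBrain's TransformerASR.

```python
# Self-contained illustration of a dynamic-chunk attention mask.
import torch

def chunk_attention_mask(seq_len: int, chunk_size: int, left_chunks: int) -> torch.Tensor:
    """Boolean (seq_len, seq_len) mask where True means "may attend".

    Each frame may attend to frames in its own chunk and in up to
    `left_chunks` chunks to its left, which is the constraint that lets a
    chunk-trained model run in a streaming fashion at inference time.
    """
    chunk_id = torch.arange(seq_len) // chunk_size  # chunk index of each frame
    q_chunk = chunk_id.unsqueeze(1)                 # (seq_len, 1), queries
    k_chunk = chunk_id.unsqueeze(0)                 # (1, seq_len), keys
    return (k_chunk <= q_chunk) & (k_chunk >= q_chunk - left_chunks)

# Example: 12 frames, chunks of 4 frames, one left-context chunk.
mask = chunk_attention_mask(12, chunk_size=4, left_chunks=1)
print(mask.int())
# The mask can be passed (inverted if the API expects "True = masked") to
# torch.nn.functional.scaled_dot_product_attention or MultiheadAttention.
```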
* Create Performance file (automatically) (#2314): add a PERFORMANCE.md builder, update the recipe CSV files and READMEs, and skip it in the pre-release test.
* Llama2 interface bug (#2318): fix the interface and update `multiwoz.csv` with the correct dataset and HF links.
* New README file (#2315).
* Optimize masked Dynamic Chunk Convolution (#2308): reorganize the Conformer convolution module to drop the list of slices, remove unused variables, and add a test for the streaming code path; small test fixes and an `RNNLM.yaml` update.
* BayesSpeech (#2326): add `train_bayesspeech.py` and `bayesspeech.yaml`, update the LibriSpeech README and CSV, and add the extra requirement. Co-authored-by: Mirco Ravanelli.
* Add a new controllable exponential scheduler and update PERFORMANCE.md and the README.
Co-authored-by: mhn226 <mhn.22692@gmail.com>
Co-authored-by: Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>
Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
Co-authored-by: Ha Nguyen <43038599+mhn226@users.noreply.github.com>
Co-authored-by: flexthink <1496671+flexthink@users.noreply.github.com>
Co-authored-by: flexthink <flexthink@users.noreply.github.com>
Co-authored-by: Pooneh Mousavi <moosavi.pooneh@gmail.com>
Co-authored-by: shubham-gupta-30 <127571426+shubham-gupta-30@users.noreply.github.com>
Co-authored-by: Shubham Gupta <shubhamgupta@Shubhams-MacBook-Pro-2.local>
Co-authored-by: Parcollet Titouan <parcollet.titouan@gmail.com>
Co-authored-by: asu <sdelang@sdelang.fr>
Co-authored-by: Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Luca Della Libera <34525085+lucadellalib@users.noreply.github.com>
Co-authored-by: Yingzhi WANG <41187612+BenoitWang@users.noreply.github.com>
Co-authored-by: BenoitWang <wangyingzhi666@gmail.com>