Blog: https://github.com/ictnlp/LLaMA-Omni
Code: https://github.com/ictnlp/LLaMA-Omni
Paper: https://arxiv.org/abs/2409.06666
Model downloads:
https://huggingface.co/ICTNLP/Llama-3.1-8B-Omni
https://modelscope.cn/models/ICTNLP/Llama-3.1-8B-Omni
0 Introduction
Models such as GPT-4o enable real-time speech interaction with large language models (LLMs), greatly improving the user experience compared with traditional text-based interaction. However, there has been little exploration of how to build speech interaction models on top of open-source LLMs. To address this, the authors propose LLaMA-Omni, a novel model architecture designed for low-latency, high-quality speech interaction. LLaMA-Omni integrates a pretrained speech encoder, a speech adapter, an LLM, and a streaming speech decoder; it removes the need for speech transcription and can generate text and speech responses directly from speech instructions with very low latency. The model is built on the latest Llama-3.1-8B-Instruct. To align it with speech interaction scenarios, the authors construct InstructS2S-200K, a dataset of 200K speech instructions with corresponding speech responses. Experimental results show that, compared with previous speech-language models, LLaMA-Omni delivers better responses in both content and style, with response latency as low as 226 ms. Moreover, training LLaMA-Omni takes less than 3 days on 4 GPUs, paving the way for the efficient development of future speech-language models.
LLaMA-Omni is a speech-language model built on Llama-3.1-8B-Instruct. It supports low-latency, high-quality speech interaction, generating both text and speech responses from speech instructions.
Highlights
- Built on Llama-3.1-8B-Instruct, ensuring high-quality responses.
- Low-latency speech interaction, with latency as low as 226 ms.
- Generates text and speech responses simultaneously.
- Trained in less than 3 days using only 4 GPUs.
1 Model Performance
Experimental results show that, compared with previous speech-language models, LLaMA-Omni delivers better responses in both content and style, with response latency as low as 226 ms.
2 Model Architecture
An "open-source GPT-4o" trained on only 4 GPUs in under 3 days: LLaMA-Omni integrates a speech encoder, a speech adapter, an LLM, and a streaming speech decoder, so that text and speech responses are generated directly from speech instructions without first transcribing the speech into text.
The model is trained with a two-stage strategy: the speech adapter and the LLM are first trained to generate text responses, and the speech decoder is then trained to generate speech responses. The full training run takes about 65 hours on 4 GPUs. In experiments, LLaMA-Omni achieves low response latency (226 ms) and high-quality speech output, outperforming other speech-language models such as SpeechGPT. A minimal sketch of the pipeline is given below.
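To make the data flow concrete, here is a minimal illustrative sketch of the speech adapter and the overall pipeline. This is a sketch only: the adapter design, the downsampling factor k=5, and the dimensions enc_dim=1280 / llm_dim=4096 are assumptions for exposition, not the repository's exact implementation.
# Illustrative sketch of the LLaMA-Omni pipeline (adapter design and dimensions
# below are assumptions for exposition, not the repository's exact code).
import torch
import torch.nn as nn

class SpeechAdapter(nn.Module):
    """Downsample speech-encoder features and project them into the LLM embedding space."""
    def __init__(self, enc_dim=1280, llm_dim=4096, k=5):
        super().__init__()
        self.k = k  # concatenate every k consecutive frames to shorten the sequence
        self.proj = nn.Sequential(
            nn.Linear(enc_dim * k, llm_dim),
            nn.ReLU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats):  # feats: (B, T, enc_dim), e.g. Whisper encoder outputs
        B, T, D = feats.shape
        T = T - T % self.k  # drop the tail so T is divisible by k
        feats = feats[:, :T].reshape(B, T // self.k, D * self.k)
        return self.proj(feats)  # (B, T // k, llm_dim), fed to the LLM as "speech tokens"

# Conceptual data flow:
#   waveform -> speech encoder (Whisper) -> SpeechAdapter -> LLM (text response tokens)
#            -> streaming speech decoder (discrete units) -> vocoder (HiFi-GAN) -> waveform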
3 Conclusion
This paper proposes LLaMA-Omni, a novel model architecture that enables low-latency, high-quality speech interaction with LLMs. LLaMA-Omni is built on the latest Llama-3.1-8B-Instruct model, adding a speech encoder for speech understanding and a streaming speech decoder that can generate text and speech responses simultaneously. To adapt the model to speech interaction scenarios, the authors construct InstructS2S-200K, a speech instruction dataset containing 200K speech instructions and their corresponding speech responses. Experimental results show that, compared with previous speech-language models, LLaMA-Omni delivers better responses in both content and style, with response latency as low as 226 ms. Moreover, LLaMA-Omni can be trained on 4 GPUs in less than 3 days, making it possible to quickly develop speech interaction models on top of the latest LLMs. Future work will explore enhancing the expressiveness of generated speech responses and improving real-time interaction capabilities.
4 Quick Start
Local Inference
To run inference locally, please organize the speech instruction files according to the format in the omni_speech/infer/examples directory, then refer to the following script.
bash omni_speech/infer/run.sh omni_speech/infer/examples
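For reference, each entry in the speech instruction file pairs a wav path with a conversation turn containing the <speech> placeholder. The snippet below is a hypothetical example that mirrors the fields constructed by the predictor code further down; the wav path and output file name are placeholders, not files shipped with the repository.
# Hypothetical example of a speech-instruction entry; the wav path and the
# output file name are placeholders.
import json

questions = [
    {
        "speech": "omni_speech/infer/examples/example.wav",  # placeholder path
        "conversations": [
            {
                "from": "human",
                "value": "<speech>\nPlease directly answer the questions in the user's speech.",
            }
        ],
    }
]

with open("omni_speech/infer/examples/questions.json", "w") as f:  # placeholder name
    json.dump(questions, f, ensure_ascii=False, indent=2)
The Python code below is a cog-based predictor that wraps the full pipeline: it loads the Llama-3.1-8B-Omni checkpoint together with the unit HiFi-GAN vocoder, runs generation on a single speech input, and writes the synthesized response to a wav file.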
import os
import time
import subprocess
import json

import soundfile as sf
import torch
from cog import BasePredictor, Input, Path, BaseModel
from fairseq import utils as fairseq_utils
from fairseq.models.text_to_speech.vocoder import CodeHiFiGANVocoder

from omni_speech.model.builder import load_pretrained_model
from omni_speech.utils import disable_torch_init
from omni_speech.infer.infer import create_data_loader, ctc_postprocess

MODEL_CACHE = "models"
MODEL_URL = (
    f"https://weights.replicate.delivery/default/ictnlp/LLaMA-Omni/{MODEL_CACHE}.tar"
)
# Force offline mode and keep all caches inside MODEL_CACHE
os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HOME"] = MODEL_CACHE
os.environ["TORCH_HOME"] = MODEL_CACHE
os.environ["HF_DATASETS_CACHE"] = MODEL_CACHE
os.environ["TRANSFORMERS_CACHE"] = MODEL_CACHE
os.environ["HUGGINGFACE_HUB_CACHE"] = MODEL_CACHE


class ModelOutput(BaseModel):
    audio: Path
    text: str


def download_weights(url, dest):
    start = time.time()
    print("downloading url: ", url)
    print("downloading to: ", dest)
    subprocess.check_call(["pget", "-x", url, dest], close_fds=False)
    print("downloading took: ", time.time() - start)


class Predictor(BasePredictor):
    def setup(self) -> None:
        """Load the model into memory to make running multiple predictions efficient"""
        if not os.path.exists(MODEL_CACHE):
            download_weights(MODEL_URL, MODEL_CACHE)
        # Model
        disable_torch_init()
        self.tokenizer, self.model, _ = load_pretrained_model(
            f"{MODEL_CACHE}/Llama-3.1-8B-Omni", model_base=None, s2s=True
        )
        # Unit-based HiFi-GAN vocoder that turns discrete speech units into a waveform
        with open(f"{MODEL_CACHE}/vocoder/config.json") as f:
            vocoder_cfg = json.load(f)
        self.vocoder = CodeHiFiGANVocoder(
            f"{MODEL_CACHE}/vocoder/g_00500000", vocoder_cfg
        ).cuda()

    def predict(
        self,
        input_audio: Path = Input(description="Input audio"),
        prompt: str = Input(
            default="Please directly answer the questions in the user's speech"
        ),
        temperature: float = Input(
            description="Controls randomness. Lower values make the model more deterministic, higher values make it more random.",
            default=0.0,
            ge=0.0,
            le=1.0,
        ),
        top_p: float = Input(
            description="Controls diversity of the output. Valid when temperature > 0. Lower values make the output more focused, higher values make it more diverse.",
            default=0.0,
            ge=0.0,
            le=1.0,
        ),
        max_new_tokens: int = Input(
            description="Maximum number of tokens to generate", default=256, ge=1
        ),
    ) -> ModelOutput:
        """Run a single prediction on the model"""
        questions = [
            {
                "speech": str(input_audio),
                "conversations": [{"from": "human", "value": f"<speech>\n{prompt}"}],
            }
        ]
        data_loader = create_data_loader(
            questions,
            self.tokenizer,
            self.model.config,
            input_type="mel",
            mel_size=128,
            conv_mode="llama_3",
        )
        (input_ids, speech_tensor, speech_length) = next(iter(data_loader))
        input_ids = input_ids.to(device="cuda", non_blocking=True)
        speech_tensor = speech_tensor.to(
            dtype=torch.float16, device="cuda", non_blocking=True
        )
        speech_length = speech_length.to(device="cuda", non_blocking=True)

        with torch.inference_mode():
            # Generate text tokens and discrete speech units in one pass
            output_ids, output_units = self.model.generate(
                input_ids,
                speech=speech_tensor,
                speech_lengths=speech_length,
                do_sample=True if temperature > 0 else False,
                temperature=temperature,
                top_p=top_p if temperature > 0 else None,
                num_beams=1,
                max_new_tokens=max_new_tokens,
                use_cache=True,
                pad_token_id=128004,
                streaming_unit_gen=False,
            )

        prediction = self.tokenizer.batch_decode(output_ids, skip_special_tokens=True)[
            0
        ].strip()

        # Remove CTC blanks/repeats from the predicted unit sequence
        output_units = ctc_postprocess(
            output_units, blank=self.model.config.unit_vocab_size
        )
        print(prediction)
        print(f"output_units: {output_units}")
        print(type(output_units))
        output_units = [(list(map(int, output_units.strip().split())))]

        # Vocode the unit sequence into a 16 kHz waveform
        x = {
            "code": torch.LongTensor(output_units[0]).view(1, -1),
        }
        x = fairseq_utils.move_to_cuda(x)
        wav = self.vocoder(x, True)

        out_path = "/tmp/out.wav"
        sf.write(
            out_path,
            wav.detach().cpu().numpy(),
            16000,
        )
        return ModelOutput(audio=Path(out_path), text=prediction)
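For a quick local test, the predictor can also be driven directly from Python. The snippet below is a minimal sketch: it assumes the code above is saved as predict.py, that a CUDA GPU is available, and that a sample wav exists at the placeholder path.
# Minimal driver for the cog predictor above (normally `cog predict` calls
# setup()/predict() for you); module name and audio path are placeholders.
from cog import Path

from predict import Predictor  # assumes the code above is saved as predict.py

predictor = Predictor()
predictor.setup()  # downloads weights on first run, then loads the model and vocoder

result = predictor.predict(
    input_audio=Path("omni_speech/infer/examples/example.wav"),  # placeholder path
    prompt="Please directly answer the questions in the user's speech",
    temperature=0.0,  # greedy decoding
    top_p=0.0,
    max_new_tokens=256,
)
print(result.text)   # text response
print(result.audio)  # path to the synthesized speech (/tmp/out.wav)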
References:
https://zhuanlan.zhihu.com/p/720048515
Acknowledgements
- LLaVA: The codebase we built upon.
- SLAM-LLM: We borrow some code for the speech encoder and speech adapter.
(End)