Add support for Ascend NPU
#84 opened about 7 hours ago by statelesshz
RuntimeError: DefaultCPUAllocator: not enough memory: you tried to allocate 224395264 bytes.
#82 opened 7 days ago by 0xrk
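A hedged note on the allocator error in #82: it typically appears when the full fp32 weights are materialized in CPU RAM during loading. A minimal sketch of a common workaround, assuming the standard `transformers` loading path; `torch_dtype=torch.float16` and `low_cpu_mem_usage=True` are general suggestions, not a confirmed fix for this specific report.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
# Load directly in half precision and avoid staging a full fp32 copy in CPU RAM.
model = AutoModel.from_pretrained(
    "THUDM/chatglm2-6b",
    trust_remote_code=True,
    torch_dtype=torch.float16,   # halves host-side memory per weight
    low_cpu_mem_usage=True,      # stream weights instead of building an fp32 copy first
).cuda().eval()
```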
[AUTOMATED] Model Memory Requirements
#81 opened 15 days ago by model-sizer-bot
pytorch1.12
1 · #80 opened 21 days ago by zxyy123
Any support for adding system instructions, similar to ChatGPT?
#79 opened 25 days ago by timpan
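On the system-instruction question in #79, a minimal sketch: ChatGLM2 has no dedicated system-prompt field, but its `chat()` API takes a `history` of (query, response) pairs, so one common workaround is to seed that history with an instruction turn. The prompt text and the seeded reply below are illustrative assumptions, not an official mechanism.

```python
# Assumes `model` and `tokenizer` are already loaded with trust_remote_code=True.
system_instruction = "You are a concise assistant. Always answer in English."  # hypothetical prompt
seed_history = [(system_instruction, "Understood.")]  # fake first turn acting as a system message

response, history = model.chat(tokenizer, "Introduce yourself.", history=seed_history)
print(response)
```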
What is apply_query_key_layer_scaling used for?
1 · #78 opened 28 days ago by yywind
Update tokenizer_config.json
#77 opened about 2 months ago by LeeSAIF
Is there an ft-accelerated version? Badly needed, many thanks
#76 opened about 2 months ago by chaochaoli
what is the multi_query_group_num in config.json?
#75 opened about 2 months ago by lg920810
A record/discussion of poor results from P-tuning v2 on chatglm2-6b
#74 opened about 2 months ago by LittleGreen
How can int4 be used right when the model is created?
2 · #73 opened about 2 months ago by shamankk
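For the int4 question in #73, a minimal sketch of two common options; the `quantize(4)` call follows the pattern shown in the ChatGLM2 README, while the pre-quantized checkpoint name in the comment is an assumption to verify before use.

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

# Option 1: load the fp16 weights, then quantize them to int4 in memory.
model = AutoModel.from_pretrained(
    "THUDM/chatglm2-6b", trust_remote_code=True
).quantize(4).cuda().eval()

# Option 2 (assumed checkpoint name): load an already-quantized int4 variant directly.
# model = AutoModel.from_pretrained("THUDM/chatglm2-6b-int4", trust_remote_code=True).cuda().eval()
```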
Tokenizer error: the eos token is not recognized
#72 opened about 2 months ago by zheng-nlper
compatible with DirectML
1 · #71 opened about 2 months ago by davinwang
How can a local text file be turned into a .bin model file?
1 · #70 opened about 2 months ago by shantone
Inference after fine-tuning raises IndexError: piece id is out of range.
3 · #69 opened about 2 months ago by lyy0905
chatglm2_6b
#68 opened about 2 months ago by dailywsx
fix modeling_chatglm.py error: the `-` operator with a bool tensor is not supported
#67 opened about 2 months ago by shibing624
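On the bool-tensor error targeted by #67, a minimal standalone sketch of the usual replacement pattern; the exact line in modeling_chatglm.py may differ.

```python
import torch

mask = torch.tensor([[True, False, True]])

# Newer PyTorch rejects `1 - mask` on a bool tensor:
#   RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported.
inverted_bool = ~mask                    # invert while staying boolean
inverted_int = 1 - mask.to(torch.int64)  # or cast first if integer arithmetic is needed
print(inverted_bool, inverted_int)
```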
Multi-GPU fine-tuning: the positional encoding reports a shape error; runs fine on a single GPU
#66 opened about 2 months ago by smiling-xu
add get_output_embeddings()
#65 opened about 2 months ago by ranchlai
add get_output_embeddings()
#64 opened about 2 months ago by ranchlai
How can I get the embedding of an input sentence?
2 · #59 opened 2 months ago by GuiltyInori
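For #59, a minimal sketch of one common way to get a sentence embedding: run the model with hidden states enabled and mean-pool the last layer. That this custom model accepts `output_hidden_states` and returns batch-first tensors are assumptions, and mean pooling is only one of several reasonable choices.

```python
import torch

# Assumes `model` and `tokenizer` are already loaded with trust_remote_code=True.
inputs = tokenizer("How do I get a sentence embedding?", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, return_dict=True)

last_hidden = outputs.hidden_states[-1]       # assumed shape: (batch, seq_len, hidden)
sentence_embedding = last_hidden.mean(dim=1)  # mean-pool over the token dimension
print(sentence_embedding.shape)
```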
尽情 -> 敬请: doesn't "尽情期待" read a bit oddly?
#58 opened 2 months ago by fj4444
about underscore character
1 · #57 opened 2 months ago by captainst
You may consider adding `ignore_mismatched_sizes=True` to the model's `from_pretrained` call.
#56 opened 2 months ago by EthanMiao
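A minimal sketch of the suggestion in #56: `ignore_mismatched_sizes=True` is a standard `transformers` argument that loads a checkpoint even when some weight shapes no longer match the config, re-initializing the mismatched weights instead of raising.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "THUDM/chatglm2-6b",
    trust_remote_code=True,
    ignore_mismatched_sizes=True,  # skip shape checks instead of raising an error
)
```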
Model parallelism fails; a fix is proposed
1 · #54 opened 2 months ago by yuanzhoulvpi
Is there a maximum generation length? Generation stops abruptly
3 · #53 opened 2 months ago by Squidwargg
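On #53, output that stops abruptly is usually bounded by the maximum length passed to `chat()`. A minimal sketch; the `max_length` keyword follows the repo's chat API, and the value shown is only an example.

```python
# Assumes `model` and `tokenizer` are already loaded with trust_remote_code=True.
response, history = model.chat(
    tokenizer,
    "Write a fairly long article about large language models.",
    history=[],
    max_length=8192,  # raise this if the output is being cut off
)
print(response)
```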
After P-Tuning v2 fine-tuning, the model's other capabilities degrade; how can this be fixed?
2 · #52 opened 2 months ago by couldn
After fine-tuning, the model's predictions are always empty. I fine-tuned with the parameters from the README, but the base model was the int4-quantized version; could that be the cause?
4 · #50 opened 2 months ago by couldn
add set_input_embedding to support resizing token embeddings
#49 opened 2 months ago by Yes365
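For context on #49, a minimal sketch of the workflow that needs the input-embedding setter: after adding tokens, `resize_token_embeddings` relies on the model's get/set input-embedding hooks. The added token is hypothetical, and whether this custom tokenizer accepts `add_special_tokens` is an assumption.

```python
# Assumes `model` and `tokenizer` are already loaded with trust_remote_code=True.
num_added = tokenizer.add_special_tokens({"additional_special_tokens": ["<tool>"]})  # hypothetical token
if num_added > 0:
    # Grows the input (and any tied output) embedding matrix to the new vocab size.
    model.resize_token_embeddings(len(tokenizer))
```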
Update README.md
#48 opened 2 months ago by Arjun2001
Update README.md
#47 opened 2 months ago by Arjun2001
Is chatGLM-6b, after updating automatically, the same as chatGLM2-6b?
1 · #46 opened 2 months ago by lysen963
Calling GLM from OpenLLM
#45 opened 2 months ago by cat13
Is there a download for the quantization_kernels.c and quantization_kernels_parallel.c files?
3 · #43 opened 2 months ago by haaaaaaaa1
Suggestion: add full_attention_mask to the forward function's parameters
#35 opened 3 months ago by zkwhandan
Why can't the Model Database Hosted Inference API be used directly?
#34 opened 3 months ago by chrisliang
GGML version please!
2 · #33 opened 3 months ago by Hoioi
BatchEncoding error
#32 opened 3 months ago by thirdHand
When using oobabooga/text-generation-webui to load this model, the chat message was blank and the terminal reported: IndexError: list index out of range
3 · #29 opened 3 months ago by elven2023
Utilizing chatglm2-6b for Downstream Classification Tasks
#28 opened 3 months ago by CeroShrijver
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
4 · #24 opened 3 months ago by lizongran
unable to access 'https://Model Database.co/THUDM/chatglm2-6b/': Recv failure: Connection reset by peer
8 · #22 opened 3 months ago by RogerChen
tokenizer.bos_token_id is None
4 · #20 opened 3 months ago by yourui
P-tuning fine-tuning error: 'ChatGLMModel' object has no attribute 'prefix_encoder'
3 · #19 opened 3 months ago by moumouliu