| name | about | labels |
| --- | --- | --- |
| Bug Report | Use this template for reporting a bug | kind/bug |
llama2_70b distributed evaluation: setting use_past to True in the config file makes the evaluation fail with a ReshapeAndCache operator error. With use_past set to False the evaluation runs normally, but it is very slow: after more than two hours it was not even half finished.
Model repo: https://gitee.com/mindspore/mindformers/blob/dev/docs/model_cards/llama2.md#%E8%AF%84%E6%B5%8B
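For context, a minimal sketch of the toggle being described, assuming the usual MindFormers YAML layout from the llama2 model card (the key path `model.model_config.use_past` and the file name are assumptions, not taken from this issue):

```python
# Minimal reproduction sketch: flip use_past in the evaluation YAML.
# Key path and file name are assumed, not confirmed by this issue.
import yaml

CONFIG = "run_llama2_70b.yaml"  # hypothetical config file name

with open(CONFIG, "r") as f:
    cfg = yaml.safe_load(f)

# use_past=True  -> evaluation fails while compiling ReshapeAndCache (this issue)
# use_past=False -> evaluation runs, but without incremental inference it is
#                   very slow (less than half done after 2+ hours)
cfg["model"]["model_config"]["use_past"] = True

with open(CONFIG, "w") as f:
    yaml.dump(cfg, f)
```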
Hardware Environment (Ascend/GPU/CPU): Ascend
CANN: Milan_C17/20240414
MS: master_20240506061517_d8802c69db29
MF: dev_20240506121520_6ffde9b33612a3
Execution Mode (PyNative/Graph):
/mode pynative
/mode graph
Test case repo path: MindFormers_Test/cases/llama2/70b/train/
Test case:
test_mf_llama2_70b_eval_squad_8p_0001
Expected result: the network evaluation completes successfully.
2024-05-07 14:56:55,871 - mindformers[mindformers/trainer/utils.py:345] - INFO - .........Building model.........
[CRITICAL] ANALYZER(935442,ffffaac83020,python):2024-05-07-14:57:09.610.720 [mindspore/ccsrc/pipeline/jit/ps/static_analysis/prim.cc:1263] CheckArgsSizeAndType] For Operator[ReshapeAndCache], slot_mapping's type 'None' does not match expected type 'Tensor'.
The reason may be: lack of definition of type cast, or incorrect type when creating the node.
This exception is caused by framework's unexpected error. Please create an issue at https://gitee.com/mindspore/mindspore/issues to get help.
2024-05-07 14:57:20,216 - mindformers[mindformers/tools/cloud_adapter/cloud_monitor.py:43] - ERROR - Traceback (most recent call last):
File "/home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
result = run_func(*args, **kwargs)
File "run_mindformer.py", line 41, in main
trainer.evaluate(eval_checkpoint=config.load_checkpoint)
File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/_checkparam.py", line 1372, in wrapper
return func(*args, **kwargs)
File "/home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/trainer/trainer.py", line 603, in evaluate
compute_metrics=self.compute_metrics, is_full_config=True, **kwargs)
File "/home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 161, in evaluate
**kwargs)
File "/home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 229, in generate_evaluate
transform_and_load_checkpoint(config, model, network, dataset, do_eval=True)
File "/home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/trainer/utils.py", line 346, in transform_and_load_checkpoint
build_model(config, model, dataset, do_eval=do_eval, do_predict=do_predict)
File "/home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/trainer/utils.py", line 470, in build_model
model.infer_predict_layout(*next(dataset.create_tuple_iterator()))
File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 1896, in infer_predict_layout
predict_net.compile(*predict_data)
File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 997, in compile
jit_config_dict=self._jit_config_dict, **kwargs)
File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/common/api.py", line 1642, in compile
result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
TypeError: For Operator[ReshapeAndCache], slot_mapping's type 'None' does not match expected type 'Tensor'.
The reason may be: lack of definition of type cast, or incorrect type when creating the node.
----------------------------------------------------
- Framework Unexpected Exception Raised:
----------------------------------------------------
This exception is caused by framework's unexpected error. Please create an issue at https://gitee.com/mindspore/mindspore/issues to get help.
----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/pipeline/jit/ps/static_analysis/prim.cc:1263 CheckArgsSizeAndType
----------------------------------------------------
- The Traceback of Net Construct Code:
----------------------------------------------------
# 0 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:356
if self.use_past:
# 1 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:357
if not isinstance(batch_valid_length, Tensor):
# 2 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:359
if self.training:
# 3 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:362
tokens = input_ids
^
# 4 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:365
if not self.is_first_iteration:
# 5 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:369
if pre_gather:
# 6 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:367
output = self.model(tokens, batch_valid_length, batch_index, zactivate_len, block_tables, slot_mapping)
^
# 7 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:193
if self.use_past:
# 8 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:194
if self.is_first_iteration:
# 9 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:195
freqs_cis = self.freqs_mgr.prefill(bs, seq_len)
^
# 10 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:194
if self.is_first_iteration:
# 11 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:367
output = self.model(tokens, batch_valid_length, batch_index, zactivate_len, block_tables, slot_mapping)
^
# 12 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:206
for i in range(self.num_layers):
# 13 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:207
h = self.layers[i](h, freqs_cis, mask, batch_valid_length=batch_valid_length, block_tables=block_tables,
^
# 14 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama_transformer.py:500
if not self.use_past:
# 15 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama.py:207
h = self.layers[i](h, freqs_cis, mask, batch_valid_length=batch_valid_length, block_tables=block_tables,
^
# 16 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama_transformer.py:507
h = self.attention(input_x, freqs_cis, mask, batch_valid_length, block_tables, slot_mapping)
^
# 17 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama_transformer.py:248
if self.qkv_concat:
# 18 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama_transformer.py:258
query = self.cast(self.wq(x), self.dtype) # dp, 1 -> dp, mp
^
# 19 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama_transformer.py:263
if self.use_past:
# 20 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama_transformer.py:264
freqs_cos, freqs_sin, _ = freqs_cis
# 21 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/models/llama/llama_transformer.py:265
context_layer = self.infer_attention(query, key, value, batch_valid_length, block_tables, slot_mapping,
^
# 22 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/modules/infer_attention.py:281
if self.use_rope_rotary_emb:
# 23 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/modules/infer_attention.py:283
freqs_cos = self.cast(freqs_cos, mstype.float16)
# 24 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/modules/infer_attention.py:289
if self.is_first_iteration:
# 25 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/modules/infer_attention.py:290
if self.input_layout == "BSH":
^
# 26 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/modules/infer_attention.py:291
context_layer = self.flash_attention(query, key, value, attn_mask, alibi_mask)
^
# 27 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/modules/infer_attention.py:290
if self.input_layout == "BSH":
^
# 28 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/modules/infer_attention.py:286
key_out = self.paged_attention_mgr(key, value, slot_mapping)
^
# 29 In file /home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/mindformers/modules/paged_attention_mgr.py:61
return self.reshape_and_cache(key, value, self.key_cache, self.value_cache, slot_mapping)
^
(See file '/home/jenkins0/MindFormers_Test/cases/llama2/70b/train/test_mf_llama2_70b_eval_squad_8p_0001/rank_0/om/analyze_fail.ir' for more details. Get instructions about `analyze_fail.ir` at https://www.mindspore.cn/search?inputValue=analyze_fail.ir)
Forwarding to 谭纬城.
Please assign a maintainer to check this issue.
@zhangjie18
Aligned with the test team:
PPL evaluation can only run with use_past=False; it does not support incremental inference. The non-PPL, generative evaluations run with use_past=True and use incremental inference.
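As a small illustration of that rule (the metric names here are assumptions, following common MindFormers config values):

```python
# Illustrative sketch: PPL scores the whole sequence in one forward pass,
# so incremental (use_past) inference does not apply; generative metrics
# decode token by token and should enable it.
def choose_use_past(metric_type: str) -> bool:
    ppl_metrics = {"PerplexityMetric"}        # full-sequence scoring
    return metric_type not in ppl_metrics     # e.g. "EmF1Metric" -> True

assert choose_use_past("EmF1Metric") is True
assert choose_use_past("PerplexityMetric") is False
```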
For llama2-70B distributed evaluation, the predict_infer_layout logic needs to be added; see the sketch below.
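A rough sketch of what that logic might look like inside `build_model` (mindformers/trainer/utils.py), based on the failing call in the traceback above; `prepare_inputs_for_predict_layout` is an assumed helper that builds the full incremental-inference input tuple rather than the raw dataset tuple:

```python
# Sketch under stated assumptions, not the actual patch. The inputs handed
# to infer_predict_layout must include every tensor the use_past graph
# expects (batch_valid_length, block_tables, slot_mapping, ...). Passing
# the raw dataset tuple leaves slot_mapping as None, which is exactly what
# the ReshapeAndCache type check rejects.
input_ids = next(dataset.create_tuple_iterator())[0]
infer_data = network.prepare_inputs_for_predict_layout(input_ids)  # assumed helper
model.infer_predict_layout(*infer_data)
```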
Forwarding to 冯浩 for verification and tracking.
Verification result:
2024-05-09 09:46:06,469 - mindformers[mindformers/generation/text_generator.py:886] - INFO - total time: 52.50571131706238 s; generated tokens: 1812 tokens; generate speed: 34.51053141739208 tokens/s
2024-05-09 09:46:06,470 - mindformers[mindformers/modules/block_tables.py:129] - INFO - Clear block table cache engines.
2024-05-09 09:46:06,471 - mindformers[mindformers/trainer/causal_language_modeling/causal_language_modeling.py:283] - INFO - Step[1/2067], cost time 52.5124s, every example cost time is 52.5124, generate speed: 34.5062 tokens/s, avg speed: 0.0000 tokens/s, remaining time: 0:00:00
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Regression version: MS: master_20240509061515_b6f9201324ff
MF: dev_20240509021521_8e97c2b1676e3b
Regression steps: follow the reproduction steps in this issue.
Underlying issue: resolved.
Test conclusion: regression passed.
Regression date: 2024-05-09.