Add:
* denotes a required field
Title*

Title of paper. Add TeX formulas using the following formats: $In-line Formula$ or $$Block Formula$$.
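
For example (an illustrative snippet, not part of the original form text): writing $e^{i\pi} + 1 = 0$ renders the formula in-line, while

$$\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$$

renders it as a standalone block.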

Authors*

Search author profile by first, middle and last name or email address. All authors must have an OpenReview profile prior to submitting a paper.

Keywords*

Comma separated list of keywords.

TL;DR

"Too Long; Didn't Read": a short sentence describing your paper

Abstract*

Abstract of paper. Add TeX formulas using the following formats: $In-line Formula$ or $$Block Formula$$.

In recent years, Large Language Models (LLMs) have made significant strides towards Artificial General Intelligence. However, training these models from scratch requires substantial computational resources and vast amounts of text data. In this paper, we explore an alternative approach to constructing an LLM for a new language: continually pre-training (CPT) from existing pre-trained LLMs instead of starting from randomly initialized parameters. Based on parallel experiments on 40 model sizes ranging from 40M to 5B parameters, we find that 1) CPT converges faster and saves significant resources in a scalable manner. 2) CPT adheres to an extended scaling law with a joint data-parameter scaling term. 3) The compute-optimal data-parameter allocation for CPT differs markedly according to our estimated scaling factors. 4) The effectiveness of transfer at scale is influenced by training duration and linguistic properties, while remaining robust to data replaying, a method that effectively mitigates catastrophic forgetting in CPT. We hope our findings provide deeper insights into the transferability of LLMs at scale for the research community.
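As a point of reference only (this parametric form is an assumption for illustration, not the fitted law from the paper): an extended scaling law with a joint data-parameter term could be written in a Chinchilla-style form as

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} + \frac{C}{N^{\gamma} D^{\delta}},$$

where $N$ is the parameter count, $D$ is the number of training tokens, and the final term couples the two; the constants $E, A, B, C$ and the exponents are illustrative placeholders rather than the estimated scaling factors reported in the paper.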

Track*

Select the one research track that your submission best fits into.

Paper Link*

Submit the URL of your OpenReview submission to ARR where your reviews and meta-review are available. It should look like 'https://openreview.net/forum?id=abCD-EFgHI'

Response To Metareview

Optional comments to help the senior area chairs interpret your reviews and metareview (max 2000 characters).

Thank you for providing detailed and insightful feedback on our paper. We are very pleased to see the positive evaluation from the reviewers and the recognition of our contribution to the field of Cross-Lingual Continual Pre-Training (CPT).

In response to your suggested revisions, we reply as follows:

Tokenization and its Impact on CPT: In the revised version of our paper, we will describe the tokenizer used in our experiments and discuss the potential impact of vocabulary extension on CPT performance.

Color Scheme of Figure 1: Thank you for your feedback on the color scheme of Figure 1. We will reassess and improve the color scheme to enhance clarity.

We will carefully consider your valuable suggestions and incorporate these improvements in the revised version of our paper. Thank you again for your positive and constructive feedback.

License*
CC BY 4.0
Readers*
EMNLP 2024 Conference
authorids
Signatures*
signatures
