
Adaptive Multimodal Unlearning

Bin Wang, Shenjing Wu, Xin Wang, Jinyu Cong, Pingping Wang, Xin Gao, Zhongyi Han, Benzheng Wei

02 Apr 2025 (modified: 05 Jul 2025) · Submitted to ACMMM 2025 · Conference, Senior Area Chairs, Area Chairs, Reviewers, Authors · Revisions · BibTeX · CC BY 4.0
Abstract:

Given the increasing prominence of privacy concerns, Machine Unlearning (MU) techniques have become crucial. While existing unlearning methods have achieved great success in unimodal settings, efficiently removing private data in multimodal learning still faces significant challenges: 1) it is hard to precisely remove specific data influences within complex, high-dimensional fused multimodal representations; 2) cross-modal information residue and interference, where information from one modality can easily leak into others. To this end, we propose the Adaptive Multimodal Unlearning (AMU) framework, designed to address the two aforementioned challenges, which comprises two main components: the Multi-Objective Unlearning (MOU) module and the Adversarial Feature Adaptation (AFA) module. We design a masked gradient update strategy that selectively modifies gradients based on estimated parameter influence with respect to the forget set. This enables the precise targeting and forgetting of fused features corresponding to the specific multimodal data intended for unlearning. Simultaneously, we introduce an adversarial feature adaptation mechanism to neutralize specific representational dependencies between modalities, thereby resolving the issue of cross-modal information residue. We construct a benchmark that includes six datasets and two forgetting scenarios. Extensive experiments demonstrate that AMU can effectively remove target information while maintaining high model fidelity on the remaining data, outperforming existing baseline methods.
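For illustration only: a minimal PyTorch sketch of a masked gradient update of the kind the abstract describes, where gradients are applied only to parameters whose estimated influence on the forget set is high. The squared-gradient influence proxy, the quantile threshold, and the gradient-ascent objective below are our assumptions, not the authors' exact formulation.

```python
import torch

def estimate_influence(model, forget_loader, loss_fn):
    """Accumulate squared gradients over the forget set as a per-parameter
    influence score (an assumed proxy; the paper's estimator may differ)."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in forget_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.detach() ** 2
    return scores

def masked_unlearning_step(model, optimizer, batch, loss_fn, scores, top_q=0.9):
    """One gradient-ascent step on a forget-set batch, applied only to the
    parameters estimated to be most influential for the forgotten data."""
    x, y = batch
    model.zero_grad()
    (-loss_fn(model(x), y)).backward()  # ascend the forget-set loss
    with torch.no_grad():
        for n, p in model.named_parameters():
            if p.grad is None:
                continue
            thresh = torch.quantile(scores[n].flatten(), top_q)
            p.grad.mul_((scores[n] >= thresh).to(p.grad.dtype))  # mask the rest
    optimizer.step()
```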

Keywords: Machine Unlearning, Multimodal Learning, Multi-Label Classification, Privacy Preservation
Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Experience] Multimedia Applications
Submission Guide: Yes
Open Discussion: Yes
Author Registration Confirmation: Yes
Reviewer Participation: Yes
Supplementary Material: zip
Submission Number: 3290

Paper Decision

Decision by Program Chairs · 05 Jul 2025, 04:15 (modified: 05 Jul 2025, 05:53) · Program Chairs, Senior Area Chairs, Area Chairs, Reviewers, Authors · Revisions
Decision: Reject

Rebuttal by Authors

Rebuttal by Authors (Bin Wang, Shenjing Wu, Xin Wang, Pingping Wang, +4 more) · 19 Jun 2025, 14:18 · Program Chairs, Senior Area Chairs, Area Chairs, Reviewers Submitted, Authors
Rebuttal:

We appreciate all reviewers' careful reading and insightful comments. We are very grateful for the overall positive assessment of the novelty and significance of our work. In what follows, we address the reviewers' concerns and questions.

Question 1: The text mentions cross-modal information leakage, but unimodal unlearning methods already exist. Why are multimodal solutions still valuable in practice?

Unimodal unlearning methods have fundamental shortcomings in multimodal scenarios, failing to address two key challenges: (1) Removing data from fused representations: multimodal models entangle information from sources like images and text into fused representations. Unimodal methods, not designed for such complexity, struggle to precisely remove data without causing collateral damage or incomplete unlearning. (2) Eliminating 'cross-modal information leakage': information leaks from one modality and is stored in another's representations. This residual data compromises a user's 'right to be forgotten'. Our AMU framework, with its Adversarial Feature Adaptation module, is designed to eliminate these cross-modal dependencies. Our work addresses multimodal data unlearning, a critical and underexplored area for developing trustworthy, secure, and responsible AI.
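For illustration only: a common way to make 'cross-modal information leakage' measurable is a linear probe that tries to recover one modality's labels from another modality's features after unlearning; accuracy well above chance signals residual leakage. The probe below is a diagnostic sketch under our own assumptions, not a component described in the rebuttal.

```python
import torch
import torch.nn as nn

def cross_modal_leakage_probe(text_feats, image_labels, epochs=200, lr=1e-2):
    """Fit a linear probe predicting image-derived labels from text-side
    features; accuracy well above chance suggests residual cross-modal
    information after unlearning. (Use a held-out split in practice.)"""
    num_classes = int(image_labels.max().item()) + 1
    probe = nn.Linear(text_feats.shape[1], num_classes)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(probe(text_feats), image_labels)
        loss.backward()
        opt.step()
    with torch.no_grad():
        acc = (probe(text_feats).argmax(dim=1) == image_labels).float().mean()
    return acc.item()
```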

Question 2: What are the fundamental differences in concept and mechanism between AMU and 'parameter selection and zeroing-out' methods like GradSelect? Where do its innovations lie?

AMU and parameter zeroing-out methods differ fundamentally in approach and mechanism. Parameter zeroing-out methods (like GradSelect) identify and remove parameters relevant to the forgetting data, an invasive approach that directly alters the model. We instead propose a new AMU paradigm of 'guided relearning and feature space reshaping.' Without discarding parameters, its core innovations are: (1) Gradient Modulation: instead of removing weights, AMU guides the retraining process by modulating gradients. This enables the model to 'learn' how to forget, allowing for more precise, less destructive adjustments. (2) Adversarial Feature Adaptation: a key innovation for multimodal challenges, this uses adversarial training to reshape the feature space, eliminating cross-modal dependencies tied to the forgotten samples.
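For illustration only: one plausible reading of the adversarial feature adaptation described here (and of the L_DA and L_C losses several reviewers ask about below) is a domain-adversarial setup: a task classification loss plus a discriminator trained, through a gradient-reversal layer, to distinguish forget-set from retain-set fused features. This decomposition is an assumption on our part; the paper's exact losses may differ.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scaled, sign-flipped gradient in the
    backward pass -- the standard domain-adversarial training trick."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def afa_losses(fused_feat, task_logits, targets, discriminator, is_forget, lam=1.0):
    """Hypothetical decomposition: L_C preserves task performance, while L_DA
    trains the encoder (via gradient reversal) to make fused features of
    forget and retain samples indistinguishable to the discriminator."""
    l_c = nn.functional.cross_entropy(task_logits, targets)    # assumed L_C
    rev = GradReverse.apply(fused_feat, lam)
    l_da = nn.functional.binary_cross_entropy_with_logits(     # assumed L_DA
        discriminator(rev).squeeze(-1), is_forget.float())
    return l_c + l_da
```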

Meta Review of Submission 3290 by Area Chair 1oxX

Meta Review by Area Chair 1oxX · 06 Jun 2025, 22:04 (modified: 05 Jul 2025, 05:11) · Senior Area Chairs, Area Chairs, Authors, Reviewers, Program Chairs · Revisions
Metareview:

This work studies multimodal data unlearning, which is a critical yet relatively under-explored field. However, reviewers pointed out that the designed strategy seems to overlap with some previous works that also select important parameters through an importance score and then update only those parameters. Some important related works are also not discussed. The motivation for the application scenario also needs to be strengthened.

Recommendation: Reject
Confidence: 5: The area chair is absolutely certain

Comments

Official Review by Reviewer LM9q · 26 May 2025, 10:42 (modified: 05 Jul 2025, 05:10) · Program Chairs, Senior Area Chairs, Area Chairs, Reviewers, Authors · Revisions
Review:

This paper proposes AMU, a multimodal machine unlearning framework. AMU selectively modifies gradients based on estimated parameter influence on the forgetting set and uses an adversarial feature adaptation mechanism to achieve multimodal machine unlearning.

Strengths:

  1. This work investigates multimodal data unlearning, a critical but relatively underexplored field.
  2. The paper designs a masked gradient update strategy that selectively modifies gradients based on estimated parameter influence with respect to the forget set.

Weaknesses:

  1. The AFA module requires additional training of a discriminator, which incurs a certain amount of time overhead, but no unlearning-time metric is reported in the subsequent experiments.
  2. What do L_DA and L_C in the AFA module in Figure 1 represent?
  3. The AFA module is not well explained with respect to how it solves the second challenge mentioned in the paper.
  4. Many hyperparameters are introduced, which may lead to deficiencies in the method's stability and transferability. I suggest the authors add the corresponding ablation experiments.
  5. What does T in formula (5) represent?
  6. Could AMU be applied to other modalities, such as multimodal image and text?
Rating: 5: Marginally below acceptance threshold
Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
Fit: Relevant to a very small part of the community
Fit Justification:

This paper focuses on multimodal unlearning.

Technical Quality: 7
Presentation Quality: 7
Rebuttal Questions:

Please see weaknesses in the above comments.

Additional Comments:

NA  

Machine unlearning for the medical field

Official Review by Reviewer rTpD · 19 May 2025, 15:09 (modified: 05 Jul 2025, 05:10) · Program Chairs, Senior Area Chairs, Area Chairs, Reviewers, Authors · Revisions
Review:

Strengths: The proposed method surpasses the baselines. The experimental results seem sufficient.

Weaknesses:

  1. This paper seems to be specifically designed for the medical field, as the experiments are all conducted on medical datasets. However, the title, abstract, and contributions do not mention the medical field, which makes the paper harder to understand.
  2. Why investigate machine unlearning in the medical field? Is there any special demand for unlearning in medical scenarios? This question remains undiscussed.
  3. Since the authors did not directly claim that the method is designed for medical scenarios, I wonder whether the proposed method can work in natural scenarios.
  4. The paper's clarity should be improved. For example, in Figure 1, I cannot understand L_DA and L_C. Furthermore, the adversarial learning is not reflected in Figure 1, which makes the proposed method harder to understand.
Rating: 4: Ok but not good enough - rejection
Confidence: 3: The reviewer is fairly confident that the evaluation is correct
Fit: Relevant to a very small part of the community
Fit Justification:

Machine unlearning for the medical field is not a topic of wide interest.

Technical Quality: 6
Presentation Quality: 4
Rebuttal Questions:

Please see weaknesses.

Additional Comments:

I have no additional comments.

Review for Submission 3290

Official Review by Reviewer NCpU · 17 May 2025, 19:01 (modified: 05 Jul 2025, 05:10) · Program Chairs, Senior Area Chairs, Area Chairs, Reviewers, Authors · Revisions
Review:

This paper tackles the critical problem of machine unlearning in multimodal settings, focusing on the challenges of precisely removing specific data influences within fused representations and mitigating cross-modal information leakage. The proposed Adaptive Multimodal Unlearning (AMU) framework addresses these issues with two novel components. The Multi-Objective Unlearning (MOU) module employs a multi-objective learning approach with two teacher-student models. The paper provides extensive experimental results, comparing against a significant number of methods and presenting results across various metrics. Detailed supplementary material is also included.

Rating: 4: Ok but not good enough - rejection
Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
Fit: Very clearly a multimedia paper, of interest to a large part of the community
Fit Justification:

The topic concerns multimodal learning methods.

Your rebuttal does not adequately address my concerns. I have revised my original score. Thanks.

Technical Quality: 6
Presentation Quality: 7
Rebuttal Questions:
  1. In Equation 4, the definition of … is not explicitly given as a formula. If it refers to a loss function from a classical method and is omitted due to space constraints, please indicate this with a citation.

  2. The presentation of Equation 10 seems abrupt. The derivation of the inequality should be included. If the derivation is placed in the Supplementary Material due to space limitations, please add a corresponding note in the main text where Equation 10 is introduced.

  3. Given the large number of evaluation metrics in Table 1 and Table 2, and the fact that different metrics have different evaluation directions (higher is better vs. lower is better), it would be beneficial to add arrows within the table entries to indicate this.

  4. In Figure 3, subplot A, the "Performance Stability Index" metric is not explicitly labeled with a marker in the graph. At first glance, it is difficult to understand and could easily be mistaken for the label of the other two metrics. Additionally, for better readability, please label the y-axis of subplots C and D in Figure 3 to clarify what they represent.

  5. According to lines 570, 571, and 581, the p-value in Table 1 should ideally be lower. AMU does not achieve the optimal p-value on the PAD-UFES and AMD datasets, and therefore it should not be highlighted in bold for these entries. Even if AMU demonstrates a better overall trade-off between data privacy protection and model performance, the individual metrics in the table should be presented objectively. Please double-check and confirm the values and highlighting in Table 2 as well.

Additional Comments:
  1. The presentation of tables and figures has some issues, and the readability could be improved. For example, some figure axes lack labels, and the tables do not specify whether higher or lower values are better for the reported metrics. Additionally, the highlighting of table entries is misleading (certain metrics are bolded even though they do not represent the best performance for that individual metric).
  2. The paper lacks visualization results to demonstrate the effect of the proposed method.
  3. The paper provides limited discussion of model details, such as the specific models used and hyperparameter settings, which could impact reproducibility.

Review of Adaptive Multimodal Unlearning

Official Review by Reviewer uoAa · 17 May 2025, 12:44 (modified: 05 Jul 2025, 05:10) · Program Chairs, Senior Area Chairs, Area Chairs, Reviewers, Authors · Revisions
Review:

Pros:

  1. This paper explores solutions to two challenges in multimodal unlearning. The authors propose two modules to address these challenges respectively.
  2. This paper presents extensive experiments on various datasets. The experimental results show the effectiveness of the proposed method.

Cons:

  1. The central mechanism of MOU is gradient masking. However, this strategy seems to overlap heavily with many other works that also select important parameters through an importance score and then update only these parameters. I did not find any difference from existing methods.

  2. Insufficient related work. The authors state, 'To the best of our knowledge, only one work, Multidelete, has explored multimodal unlearning.' However, SIU [1] also studies multimodal machine unlearning.

  3. It is unclear how the classifier in line 433 is trained. And what are L_C and L_DA in Figure 1?

[1] Li, Jiaqi, et al. "Single image unlearning: Efficient machine unlearning in multimodal large language models." Advances in Neural Information Processing Systems 37 (2024): 35414-35453.

Rating: 5: Marginally below acceptance threshold
Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
Fit: Very clearly a multimedia paper, of interest to a large part of the community
Fit Justification:

This paper explores machine unlearning in multimodal learning, which fits the needs of the MM community.

Technical Quality: 6
Presentation Quality: 6
Rebuttal Questions:

Please see the cons listed above.

Additional Comments:

The writing style of the first paragraph is confusing. There is no need to separate the sentences into 3 points.

Adaptive Multimodal Unlearning

Official Review by Reviewer jDwE · 13 May 2025, 18:59 (modified: 05 Jul 2025, 05:10) · Program Chairs, Senior Area Chairs, Area Chairs, Reviewers, Authors · Revisions
Review:

This paper presents a machine unlearning method, named AMU, designed for multimodal settings. The proposed approach consists of two main components (MOU and AFA), which address the problems of precise data removal and cross-modal interference, respectively. The method was compared with 12 baseline methods across 6 datasets (EMOTIONCLASS, PathMNIST, PAD-UFES, ASD, ECG, and AMD), covering both multimodal and unimodal applications. Improvements in accuracy, MIA risk score, and p-value are reported. Experimental results show that AMU outperforms the baseline methods in both unlearning scenarios: batch data forgetting and single-entry data forgetting. On the PAD-UFES dataset, the paper reports the highest accuracy scores of 85.33% and 83.00% for batch data forgetting and single-entry data forgetting, respectively.

Rating: 4: Ok but not good enough - rejection
Confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
Fit: Relevant to a part of the community
Fit Justification:

This topic is relevant to the community.

Technical Quality: 4
Presentation Quality: 7
Rebuttal Questions:

Overall, the proposed method is presented in detail, and the paper is quite easy to follow. However, it could be improved in the following aspects.

  1. In Figure 1, it is unclear how the two losses, L_{DA} and L_{C}, for the AFA module are computed.
  2. Please clarify how the multimodal data are preprocessed. While the data processing step is mentioned in Figure 1, it is not described in detail within the paper.
  3. Please justify how the hyperparameter \lambda_{AFA} is selected to balance the contribution of the AFA component relative to the MOU. Additional ablation studies are expected to support this selection.
  4. The list of contributions could be elaborated further. For instance, the first contribution is too generic and not convincing.
  5. The visualization in Figure 2 is too small and hard to read.
  6. Since Multidelete [6] is specifically designed for multimodal unlearning, a direct comparison between the proposed method (AMU) and this approach would be beneficial.

Additional Comments:

N/A