jinghanjia/README.md

Hi there 👋

This is Jinghan Jia!

GitHub LinkedIn Gmail

Welcome to my GitHub page! I am Jinghan, and I am currently finishing my PhD in Computer Engineering at Michigan State University.


💻 Programming languages and tools:

Pinned repositories

  1. sayakpaul/robustness-foundation-models (Public)

     This repository holds code and other relevant files for the NeurIPS 2022 tutorial "Foundational Robustness of Foundation Models".

     Jupyter Notebook · 70 stars · 5 forks

  2. OPTML-Group/Unlearn-Sparse (Public)

     [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu

     Python · 65 stars · 7 forks

  3. OPTML-Group/CLAW-SAT (Public)

     [SANER 2023] "CLAWSAT: Towards Both Robust and Accurate Code Models"

     Python · 5 stars · 1 fork

  4. OPTML-Group/Diffusion-MU-Attack (Public)

     The official implementation of the ECCV 2024 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces one fast and e…

     Python · 60 stars · 3 forks

  5. OPTML-Group/SOUL (Public)

     Official repo for the EMNLP 2024 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning"

     Python · 15 stars · 2 forks

  6. OPTML-Group/WAGLE (Public)

     Official repo for the NeurIPS 2024 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models"

     Python · 10 stars · 3 forks