Towards Exact Computation of Inductive Bias (IJCAI 2024)
This work provides extensive empirical results on training language models to count. We find that while traditional RNNs achieve inductive counting trivially, Transformers must rely on positional embeddings to count out of domain. Modern RNNs (e.g., RWKV, Mamba) also largely underperform traditional RNNs at generalizing counting inductively.
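The "inductive counting" setup above trains on short sequences and tests on much longer ones. A minimal sketch (not from the paper; task and names are illustrative) of why this is trivial for a classic RNN: a single recurrent unit with an additive update counts correctly at any length, with no dependence on position information.

```python
def rnn_count(tokens, inc="a"):
    """One-unit additive RNN: h_t = h_{t-1} + [x_t == inc].

    The hidden state is a running count, so the same recurrence that
    fits short training sequences generalizes to arbitrary lengths.
    """
    h = 0
    for t in tokens:
        h += 1 if t == inc else 0
    return h


# Length generalization: "train"-length input vs. a far longer "test" input.
train_seq = "abab"       # length 4, in the hypothetical training range
test_seq = "ab" * 100    # length 200, far outside it
print(rnn_count(train_seq))  # 2
print(rnn_count(test_seq))   # 100
```

A Transformer, by contrast, has no recurrent state; any count it computes must be reconstructed from positional embeddings, which is exactly where out-of-domain lengths break it.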
Utility repository for processing and visualizing the NADs of arbitrary PyTorch models.
This is the official code for CoLLAs 2022 paper, "InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness"
Implementation code of GKD: Semi-supervised Graph Knowledge Distillation for Graph-Independent Inference, accepted at Medical Image Computing and Computer Assisted Intervention (MICCAI 2021).
A non-exhaustive collection of vision transformer models implemented in TensorFlow.
An Information Extraction Study: Take In Mind the Tokenization! (official repository of the paper)
GitHub code for the paper "Maximum Class Separation as Inductive Bias in One Matrix". arXiv link: https://arxiv.org/abs/2206.08704
Source code for the "Computationally Tractable Riemannian Manifolds for Graph Embeddings" paper
[CogSci'21] Study of human inductive biases in CNNs and Transformers.
Emergent Communication Pretraining for Few-Shot Machine Translation
Code for "Learning Inductive Biases with Simple Neural Networks" (Feinman & Lake, 2018).
Includes PyTorch -> Keras model porting code for DeiT models with fine-tuning and inference notebooks.