From f5e2457da930a120a9d214a3c294c3e9de1329c9 Mon Sep 17 00:00:00 2001
From: Ziming Liu
Date: Mon, 29 Apr 2024 12:36:51 -0400
Subject: [PATCH 1/5] Update README.md

---
 README.md | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/README.md b/README.md
index 77c8ed5f..5bc4fc7b 100644
--- a/README.md
+++ b/README.md
@@ -25,6 +25,17 @@ pip install -e .
 pip install pykan
 ```
 
+Requirements
+
+```python
+matplotlib==3.6.2
+numpy==1.24.4
+scikit_learn==1.1.3
+setuptools==65.5.0
+sympy==1.11.1
+torch==2.2.2
+tqdm==4.66.2
+```
 To install requirements:
 
 ```python

From 12c47d4bd690d64b2af55fedb1b7b2a68f9d98d1 Mon Sep 17 00:00:00 2001
From: Ziming Liu
Date: Mon, 29 Apr 2024 12:40:56 -0400
Subject: [PATCH 2/5] Update README.md

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 5bc4fc7b..630cea27 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 
 # Kolmogorov-Arnold Newtworks (KANs)
 
-This the github repo for the paper "KAN: Kolmogorov-Arnold Networks" [link]. The documentation can be found here [link].
+This the github repo for the paper "KAN: Kolmogorov-Arnold Networks" [link]. Find the [documentaion here](https://kindxiaoming.github.io/pykan/).
 
 Kolmogorov-Arnold Networks (KANs) are promising alternatives of Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the [universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem), while KANs are based on [Kolmogorov-Arnold representation theorem](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold_representation_theorem). KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model accuracy and interpretability.
 
@@ -43,16 +43,16 @@ pip install -r requirements.txt
 ```
 
 ## Documentation
-The documenation can be found here [].
+The documenation can be found [here](https://kindxiaoming.github.io/pykan/).
 
 ## Tutorials
 
 **Quickstart**
 
-Get started with [hellokan.ipynb](./hellokan.ipynb) notebook
+Get started with [hellokan.ipynb](./hellokan.ipynb) notebook.
 
 **More demos**
 
-Jupyter Notebooks in [docs/Examples](./docs/Examples) and [docs/API_demo](./docs/API\_demo) are ready to play. You may also find these examples in documentation.
+More Notebook tutorials can be found in [tutorials](tutorials).
 

From 638d6a686d6647460f74697afc41ec4b8428d429 Mon Sep 17 00:00:00 2001
From: Ziming Liu
Date: Mon, 29 Apr 2024 12:43:15 -0400
Subject: [PATCH 3/5] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 630cea27..1ea3f559 100644
--- a/README.md
+++ b/README.md
@@ -19,7 +19,7 @@ cd pykan
 pip install -e .
 ```
 
-**Installation via pypi (soon)**
+**Installation via pypi**
 
 ```python
 pip install pykan

From c1e1decc09718c0dc2a2232d6a93c8cbf1a3d38e Mon Sep 17 00:00:00 2001
From: Ziming Liu
Date: Mon, 29 Apr 2024 12:43:43 -0400
Subject: [PATCH 4/5] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 1ea3f559..5e6c8f88 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 
 # Kolmogorov-Arnold Newtworks (KANs)
 
-This the github repo for the paper "KAN: Kolmogorov-Arnold Networks" [link]. Find the [documentaion here](https://kindxiaoming.github.io/pykan/).
+This the github repo for the paper "KAN: Kolmogorov-Arnold Networks" [link]. Find the documentation [here](https://kindxiaoming.github.io/pykan/).
 
 Kolmogorov-Arnold Networks (KANs) are promising alternatives of Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the [universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem), while KANs are based on [Kolmogorov-Arnold representation theorem](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold_representation_theorem). KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model accuracy and interpretability.
 

From 844eac979bae02900a405ca778a34a491f4fabfb Mon Sep 17 00:00:00 2001
From: Ziming Liu
Date: Mon, 29 Apr 2024 12:44:40 -0400
Subject: [PATCH 5/5] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 5e6c8f88..1cebe803 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@ This the github repo for the paper "KAN: Kolmogorov-Arnold Networks" [link]. Find the documentation [here](https://kindxiaoming.github.io/pykan/).
 
-Kolmogorov-Arnold Networks (KANs) are promising alternatives of Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the [universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem), while KANs are based on [Kolmogorov-Arnold representation theorem](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold_representation_theorem). KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model accuracy and interpretability.
+Kolmogorov-Arnold Networks (KANs) are promising alternatives of Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the universal approximation theorem, while KANs are based on Kolmogorov-Arnold representation theorem. KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model accuracy and interpretability. A quick intro of KANs [here](https://kindxiaoming.github.io/pykan/intro.html).
 
 mlp_kan_compare
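The series points readers at [hellokan.ipynb](./hellokan.ipynb) for a quickstart. For context, here is a minimal sketch of what that workflow looks like, assuming the `KAN` class and `create_dataset` helper that pykan exports as of the pinned versions above — exact names and signatures may differ between releases:

```python
# Minimal pykan quickstart sketch (assumed interface, mirroring the
# flow of hellokan.ipynb; not an exact copy of that notebook).
import torch
from kan import KAN, create_dataset

# 2 inputs -> 5 hidden nodes -> 1 output; grid=5 spline intervals per
# edge and k=3 (cubic B-splines) for each edge activation.
model = KAN(width=[2, 5, 1], grid=5, k=3)

# Synthetic regression target: f(x, y) = exp(sin(pi*x) + y^2).
f = lambda x: torch.exp(torch.sin(torch.pi * x[:, [0]]) + x[:, [1]] ** 2)
dataset = create_dataset(f, n_var=2)  # train/test inputs and labels

# Fit with LBFGS, then draw the learned activation on every edge.
model.train(dataset, opt="LBFGS", steps=20)
model.plot()
```

The `model.plot()` step is where the edge-versus-node duality described in the README pays off: because each edge carries its own learned spline, the whole network can be drawn and inspected, which is the interpretability claim made in the paragraph above.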