SNOMED OHDSI GitHub Vocabulary-v5.0 process question #5
Replies: 1 comment
I did not do this. Rather, I downloaded all the CSV files from the ATHENA download site. After creating the empty tables (and PK/FK constraints) in an OMOP instance using the recommended DDL, for example:

```sql
CREATE TABLE mladi_OMOP54.CONCEPT ( ... );

ALTER TABLE mladi_OMOP54.CONCEPT
    ADD CONSTRAINT xpk_CONCEPT PRIMARY KEY (concept_id);

CREATE INDEX idx_concept_concept_id
    ON mladi_OMOP54.CONCEPT (concept_id ASC);
```

and so forth for all vocabulary tables, I simply imported the CSVs into the tables. In our case at Pitt, we had to create an ICD-10-to-SNOMED lookup table, as all of our local diagnostic codes are ICD-10, not SNOMED.
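The two steps described above (importing the Athena files, then deriving an ICD-10-to-SNOMED lookup) can be sketched in Python with SQLite. This is a minimal, hedged illustration: the table schemas are trimmed to a few columns, the sample rows are toy stand-ins rather than real vocabulary content, and I'm assuming the Athena exports are tab-delimited (despite the `.csv` extension). The lookup uses the standard OMOP `'Maps to'` relationship between a source ICD-10-CM concept and a standard SNOMED concept.

```python
import csv
import io
import sqlite3

# Minimal sketch of the load-and-lookup steps described above.
# Schemas are trimmed to a few columns; the rows below are illustrative
# stand-ins, not real Athena vocabulary content.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE concept (
    concept_id INTEGER PRIMARY KEY,
    concept_name TEXT, vocabulary_id TEXT, concept_code TEXT)""")
conn.execute("""CREATE TABLE concept_relationship (
    concept_id_1 INTEGER, concept_id_2 INTEGER, relationship_id TEXT)""")

# Stand-in for CONCEPT.csv from Athena (assumed tab-delimited).
concept_tsv = (
    "concept_id\tconcept_name\tvocabulary_id\tconcept_code\n"
    "1567956\tType 2 diabetes mellitus\tICD10CM\tE11\n"
    "201826\tType 2 diabetes mellitus\tSNOMED\t44054006\n"
)
reader = csv.DictReader(io.StringIO(concept_tsv), delimiter="\t")
conn.executemany(
    "INSERT INTO concept VALUES "
    "(:concept_id, :concept_name, :vocabulary_id, :concept_code)",
    reader,
)

# 'Maps to' links a source (ICD-10-CM) concept to a standard SNOMED concept.
conn.execute(
    "INSERT INTO concept_relationship VALUES (1567956, 201826, 'Maps to')")

# Build an ICD-10 -> SNOMED code lookup, analogous to the table built at Pitt.
lookup = dict(conn.execute("""
    SELECT icd.concept_code, sno.concept_code
    FROM concept_relationship cr
    JOIN concept icd ON icd.concept_id = cr.concept_id_1
                    AND icd.vocabulary_id = 'ICD10CM'
    JOIN concept sno ON sno.concept_id = cr.concept_id_2
                    AND sno.vocabulary_id = 'SNOMED'
    WHERE cr.relationship_id = 'Maps to'
""").fetchall())
print(lookup)  # {'E11': '44054006'}
```

In a real load you would point `csv.DictReader` (or your database's bulk loader, e.g. PostgreSQL `COPY`) at the full Athena files, but the join logic for the lookup table is the same.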
Question:
I want to confirm that this SNOMED process is still needed to load the SNOMED vocabulary files from Athena, before I figure out how to set up an environment outside of my M1 laptop to configure the dependencies.
The GitHub information below is two years old; I wonder whether it is still relevant.
https://github.com/OHDSI/Vocabulary-v5.0/tree/880fc17ce4ec9c14b7c3e1d9f5fa199c1a997aec/SNOMED
Dependency-wise, it looks like it needs a UMLS account, a specific file download, and R on my laptop. (My M1 laptop has issues with R because of an rJava dependency, AFAIK.)
I sent this along to Marty for discussion this Thursday, 8/3/2023, in the OHDSI CHoRUS B2AI Standards Module Office Hours.
I'm willing to help create a Docker image with help from an SME (subject matter expert), and/or port the data for other folks to use, if we don't break any licenses or rules in doing so.
Best regards,
Heidi