While trying to tokenize sequences from a multi-FASTA list, the embeddings generated with the prot-BERT tokenizer and model are always the same numpy arrays, even though the input sequences differ significantly. Has anyone else faced this problem? Looking forward to some input.

The submitted sequence list looks like this:

```
['IISACLAGEKCRYTGDGFDYPALRKLVEEGKAIPVCPEVLGGLSVPRDPNEIIGGNGFDVLDGKAKVLTNRGVDTTAAFVKGAAEVLAIAQKKGARVAVLKERSPSCGSTMIYDGTFSGRRIPGCGCTAALLVKEGIRVFSEEN', 'RLLLIDGNSIAFRSFFALQNSLSRFTNADGLHTNAIYGFNKMLDIILDNVNPTDALVAFDAGKTTFRTKMYTNYKGGRAKTPSELTEQMPYLRDLLTGYGIKSYEL...]
```
The output arrays look like this:

```
[array([[-0.09921601,  0.05850809, -0.0922595 , ..., -0.00792149,
         -0.04542159,  0.07880748]], dtype=float32),
 array([[-0.09921601,  0.05850809, -0.0922595 , ..., -0.00792149,
         -0.04542159,  0.07880748]], dtype=float32),
 array([[-0.09921601,  0.05850809, -0.0922595 , ..., -0.00792149,
         -0.04542159,  0.07880748]], dtype=float32),
 array([[-0.09921601,  0.05850809, -0.0922595 , ..., -0.00792149,
         -0.04542159,  0.07880748]], dtype=float32),
 array([[-0.09921601,  0.05850809, -0.0922595 , ..., -0.00792149,
         -0.04542159,  0.07880748]], dtype=float32)]
```

Every array in the list is identical.
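One thing worth checking before suspecting the model is what the tokenizer actually produces. Below is a minimal diagnostic sketch; it assumes the standard Rostlab/prot_bert checkpoint, since the post doesn't say which one is loaded:

```python
from transformers import BertTokenizer

# Assumption: the standard ProtBert checkpoint -- substitute whatever
# tokenizer the original code actually loads.
tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)

seq = "IISACLAGEKCRYTGDGFDYPALRKLVEEGK"
ids = tokenizer(seq)["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids))
# If this prints ['[CLS]', '[UNK]', '[SEP]'], the whole sequence was mapped
# to a single unknown token, so every input to the model is identical
# regardless of the sequence -- which would explain identical embeddings.
```

ProtBert's vocabulary consists of single amino-acid tokens, and the BERT tokenizer splits on whitespace first, so an unbroken residue string like the ones in the list above can collapse to a single [UNK].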
The embedding function:

```python
import torch

def EMBED_SEQUENCE(QUERY_SEQUENCES, TOKENIZER, MODEL):
    EMBEDDINGS = []
    MODEL.eval()
    for SEQ in QUERY_SEQUENCES:
        INPUTS = TOKENIZER(SEQ, return_tensors="pt", padding=True,
                           truncation=True, max_length=1024)
        with torch.no_grad():
            OUTPUTS = MODEL(**INPUTS)
        # Mean-pool the per-token hidden states into one vector per sequence
        EMBEDDING = OUTPUTS.last_hidden_state.mean(dim=1).cpu().numpy()
        EMBEDDINGS.append(EMBEDDING)
    return EMBEDDINGS

QUERIES = EMBED_SEQUENCE(QUERY_SEQUENCES, TOKENIZER, MODEL)
```
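If the diagnostic above does print [UNK], a likely fix is the preprocessing described on the Rostlab/prot_bert model card: space-separate the residues and map the rare amino acids U, Z, O, B to X before tokenizing. A sketch (the preprocess helper is illustrative, not part of the original code):

```python
import re

def preprocess(seq):
    # ProtBert's vocabulary holds single residues, so put a space between
    # each character; map rare residues (U, Z, O, B) to X per the model card.
    return " ".join(re.sub(r"[UZOB]", "X", seq))

# Usage inside the loop above, e.g.:
# INPUTS = TOKENIZER(preprocess(SEQ), return_tensors="pt",
#                    padding=True, truncation=True, max_length=1024)
```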