# Google Colaboratory Python Utilities
To use this package in your Colaboratory notebook, include this line at the top of the notebook:

```
!pip install --upgrade -q colabutils
```
## gdrive.search_and_download

Use this method to search the current user's Google Drive for a specific file and download it to a local path in your environment. If you plan to use it to load your GCP credentials from a Google Drive file, see `gcp.load_credentials()` below.

Example:

```python
from colabutils import gdrive

credential_path = gdrive.search_and_download('credential.json', '/content/.google/credential.json')
```
Note: the file doesn't need to be in the current user's My Drive, and the user doesn't even need to own it. If the file is owned by another user but was shared with the current user, it works all the same.
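The target path in the example above includes a directory (`/content/.google`) that may not exist yet. Whether `search_and_download` creates missing parent directories isn't stated here, so creating them up front with the standard library is a safe precaution (a sketch using an illustrative relative path):

```python
import os

# Illustrative relative path standing in for '/content/.google/credential.json'.
target = "local/.google/credential.json"

# Create any missing parent directories before downloading into them.
os.makedirs(os.path.dirname(target), exist_ok=True)
print(os.path.isdir("local/.google"))  # True
```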
So, with the example above you could do something like this:
```python
# authenticate the Colab environment with the current user's credentials
from google.colab import auth
auth.authenticate_user()

# download GCP API credentials from Google Drive
from colabutils import gdrive
credential_path = gdrive.search_and_download('credential.json', '/content/.google/credential.json')

# load credentials
from google.oauth2 import service_account
creds = service_account.Credentials.from_service_account_file(credential_path)

# use the credentials to build a GCP service client
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
client = language.LanguageServiceClient(credentials=creds)

# make a call to the Natural Language API
text = u'I love python!'
document = types.Document(content=text, type=enums.Document.Type.PLAIN_TEXT)

# detect the sentiment of the text
sentiment = client.analyze_sentiment(document=document).document_sentiment
print('Text: {}'.format(text))
print('Sentiment: {}, {}'.format(sentiment.score, sentiment.magnitude))

# output:
# Text: I love python!
# Sentiment: 0.8999999761581421, 0.8999999761581421
```
## gdrive.download_and_unzip

Use this method to search the current user's Google Drive for a specific file, download it to a local path in your environment, unzip its contents, and automatically remove the downloaded zip file.

```python
from colabutils import gdrive

extracted_path = gdrive.download_and_unzip("books_dataset.zip", "/content")

# let's see its contents
!ls /content
```
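For reference, the local unzip-and-cleanup step can be sketched with the standard library alone (the Drive download itself is omitted; the helper name here is illustrative, not a colabutils internal):

```python
import os
import zipfile

def unzip_and_remove(zip_path, dest_dir):
    """Extract a zip archive into dest_dir, then delete the archive."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
    os.remove(zip_path)  # the downloaded .zip is removed once extracted
    return dest_dir
```

This mirrors the behavior described above: after the call, the extracted files are in place and the downloaded archive is gone.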
## webcam.take_and_display_photo

Takes a photo using the webcam and saves it to the environment's local path. This depends on the user allowing the browser to access the camera. Returns the image content (from a `.read()` on that file). If no `filename` parameter is provided, the default file name is `photo.jpg`.

Example:

```python
from colabutils import webcam

image_content = webcam.take_and_display_photo()
```
## audio.record

Lets the user start recording audio from the microphone, returning its contents when the user clicks 'finish'. The audio can be auto-played at the end of the recording by setting the optional parameter `auto_play` to `True`.

Example:

```python
from colabutils import audio

audio_content = audio.record()
```
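The return value is raw audio bytes. Assuming they are WAV-encoded (an assumption here, suggested by the `audio.wav` default of `record_and_save` below), you can inspect them with the standard `wave` module. A self-contained sketch, with a synthesized one-second silent clip standing in for a real recording:

```python
import io
import wave

# Synthesize one second of silent 16 kHz mono WAV bytes to stand in
# for the bytes returned by audio.record().
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 16000)
audio_content = buf.getvalue()

# Inspect the recording's duration in seconds.
with wave.open(io.BytesIO(audio_content), "rb") as w:
    duration = w.getnframes() / w.getframerate()
print(duration)  # 1.0
```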
## audio.record_and_save

Lets the user start recording audio from the microphone, saving it to a file when the user clicks 'finish'. If no `filename` parameter is provided, the default file name is `audio.wav`.

Example:

```python
from colabutils import audio

audio_filename = audio.record_and_save()
```
## gcp.load_credentials

Downloads the credentials file from a URL and returns a service account credential object based on it. Example:

```python
from colabutils import gcp

creds = gcp.load_credentials("http://website.com/credential.json")
```
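The URL path presumably boils down to fetching the JSON key and loading it from a local file. The download step can be sketched with the standard library (`fetch_credential_file` is an illustrative helper, not part of colabutils):

```python
import json
import tempfile
import urllib.request

def fetch_credential_file(url):
    """Download a service-account JSON key and return a local file path."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    json.loads(data)  # sanity-check that the payload is valid JSON
    with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as f:
        f.write(data)
        return f.name
```

The resulting path could then be passed to `service_account.Credentials.from_service_account_file`, as in the earlier example.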
If no arguments are passed, it looks for a file named `mlcredential.json` in the current user's Google Drive (including shared files).

Example:

```python
from colabutils import gcp

creds = gcp.load_credentials()
```
The returned service account credential (`creds` in this case) can be used in a GCP service client, such as the Vision API:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient(credentials=creds)
```
When downloading from Google Drive, a custom file name can be provided:

```python
creds = gcp.load_credentials(gdrivefile="custom_credential.json")
```
## vision_utils.list_faces

Describes the faces returned in `face_annotations` by a Vision API `face_detection` call.

Example:

```python
resp = client.face_detection(image=my_image)

from colabutils import vision_utils
vision_utils.list_faces(resp.face_annotations)
```
## vision_utils.list_annotations

Describes the text annotations returned in `text_annotations` by a Vision API `text_detection` call.

Example:

```python
resp = client.text_detection(image=my_image)

from colabutils import vision_utils
vision_utils.list_annotations(resp.text_annotations)
```
## Publishing to PyPI

Create the file `~/.pypirc` with the following content:

```
[distutils]
index-servers=pypi

[pypi]
repository = https://upload.pypi.org/legacy/
username = <your_username>
```
Make sure you have the latest versions of `setuptools`, `wheel`, and `twine` installed:

```
python3 -m pip install --user --upgrade setuptools wheel twine
```
Run this to generate the new version in the `dist/` folder:

```
python3 setup.py sdist bdist_wheel
```
Run this to upload the contents of the `dist/` folder to PyPI:

```
python3 -m twine upload dist/*
```