
0.5.0 #16

Merged
merged 16 commits into from
Oct 15, 2021
2 changes: 1 addition & 1 deletion .github/workflows/build_push_docker.yml
@@ -69,7 +69,7 @@ jobs:
m4dm4rtig4n/enedisgateway2mqtt:${{ steps.vars.outputs.version }}

- name: Discord notification
if: steps.check-tag.outputs.dev == 'false'
# if: steps.check-tag.outputs.dev == 'false'
env:
DISCORD_WEBHOOK: ${{ secrets.DISCORD_WEBHOOK }}
# DISCORD_EMBEDS: {color: 3447003, author: { name: client.user.username, icon_url: client.user.displayAvatarURL() }, title: "EnedisGateway2MQTT new version available => ${{ steps.vars.outputs.version }}", url: "https://hub.docker.com/r/m4dm4rtig4n/enedisgateway2mqtt", fields: [{ name: "Github", value: "https://github.com/m4dm4rtig4n/enedisgateway2mqtt"}, {name: "Docker.io", value: "https://hub.docker.com/r/m4dm4rtig4n/enedisgateway2mqtt"}], timestamp: new Date(), footer: {icon_url: client.user.displayAvatarURL(), text: "© m4dm4rtig4n"}}
94 changes: 78 additions & 16 deletions README.md
@@ -39,6 +39,12 @@ and curl test command.
The easiest way is to use Firefox in the consent process**


## EnedisGateway2MQTT limit

To avoid saturating the Enedis Gateway service, the number of API calls is limited to 15 per day.
Most of the information is collected during the first launch.
Fetching the full two years of "detailed" consumption then takes a few more days (about one week).
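As a sanity check on the "about one week" estimate, here is a back-of-the-envelope calculation; the call count comes from the table in the next section, and the constant names are illustrative, not real configuration keys:

```python
# Illustrative arithmetic only: these constants are not real settings.
DAILY_API_LIMIT = 15              # EnedisGateway2MQTT's self-imposed cap
DETAIL_CALLS_FULL_HISTORY = 105   # calls needed for 2 years of detailed data

# Ceiling division: how many days until the backlog is fully fetched.
days_needed = -(-DETAIL_CALLS_FULL_HISTORY // DAILY_API_LIMIT)
print(days_needed)  # 7
```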

## Enedis Gateway limit

Enedis Gateway limits calls to 50 per day, per PDL.
@@ -50,13 +56,15 @@ If you reach this limit, you will be banned for 24 hours!
| Parameters | Call number |
|:---------------|:---------------:|
| GET_CONSUMPTION | 3 |
| GET_CONSUMPTION_DETAIL | 105 |
| GET_PRODUCTION | 3 |
| GET_PRODUCTION_DETAIL | 105 |
| ADDRESSES | 1 |
| CONTRACT | 1 |

See the [persistance](#persistance) chapter to reduce the number of API calls.



## Environment variable

| Variable | Information | Mandatory/Default |
@@ -72,23 +80,47 @@ See the [persistance](#persistance) chapter to reduce the number of API calls.
| RETAIN | Retain data in MQTT | False |
| QOS | Quality Of Service MQTT | 0 |
| GET_CONSUMPTION | Enable API call to get your consumption | True |
| GET_CONSUMPTION_DETAIL | Enable API call to get your consumption in detail mode | True |
| GET_PRODUCTION | Enable API call to get your production | False |
| GET_PRODUCTION_DETAIL | Enable API call to get your production in detail mode | False |
| HA_AUTODISCOVERY | Enable auto-discovery | False |
| HA_AUTODISCOVERY_PREFIX | Home Assistant auto discovery prefix | homeassistant |
| BASE_PRICE | Price of kWh in base plan | 0 |
| CYCLE | Data refresh cycle (3600s minimum) | 3600 |
| OFFPEAK_HOURS | Force HP/HC format : "HHhMM-HHhMM;HHhMM-HHhMM;..." | "" |
| CONSUMPTION_PRICE_BASE | Price of kWh in base plan | 0 |
| CONSUMPTION_PRICE_HC | Price of HC kWh | 0 |
| CONSUMPTION_PRICE_HP | Price of HP kWh | 0 |
| CYCLE | Data refresh cycle (1h minimum) | 3600 |
| ADDRESSES | Get all addresses information | False |
| REFRESH_CONTRACT | Refresh contract data | False |
| REFRESH_ADDRESSES | Refresh addresses data | False |
| WIPE_CACHE | Force refresh all data (wipe all cached data) | False |
| DEBUG | Display debug information | False |
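The `OFFPEAK_HOURS` format above can be parsed into time ranges; a minimal sketch, where the function name `parse_offpeak_hours` is hypothetical and not part of the tool:

```python
def parse_offpeak_hours(value):
    """Parse the OFFPEAK_HOURS format "HHhMM-HHhMM;HHhMM-HHhMM;..."
    into a list of ((hour, minute), (hour, minute)) ranges."""
    ranges = []
    if not value:
        return ranges
    for chunk in value.split(";"):
        start, end = chunk.split("-")          # "22h00", "06h00"
        sh, sm = start.split("h")
        eh, em = end.split("h")
        ranges.append(((int(sh), int(sm)), (int(eh), int(em))))
    return ranges

print(parse_offpeak_hours("22h00-06h00"))  # [((22, 0), (6, 0))]
```

A range crossing midnight (as above) stays a single tuple; interpreting the wrap-around is left to the caller.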

*Why is there no calculation for the HC / HP ?*
## Cache

The HC / HP calculations require a lot of API calls and the limit will be reached very quickly
Since v0.3, Enedis Gateway uses an SQLite database to store all data and reduce the number of API calls.
> **Don't forget to mount /data to keep database persistence!**

> Need database => Roadmap
If you change your contract or plan, set "**REFRESH_CONTRACT**" to "**True**" to refresh the cached contract data.

## Persistance
If you move, set "**REFRESH_ADDRESSES**" to "**True**" to refresh the cached address data.

Since v0.3, Enedis Gateway uses an SQLite database to store all data and reduce the number of API calls.
Don't forget to mount /data to keep database persistence!
If you want to force a refresh of all data, set the environment variable "**WIPE_CACHE**" to "**True**".

**WARNING: this parameter wipes all data (addresses, contracts, consumption, production) and generates a lot of API calls (don't forget the [Enedis Gateway limit](#enedis-gateway-limit))!**

> Don't forget that it takes several days to recover consumption/production in detail mode.
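The cache behaviour described above can be sketched as follows. This is a simplified illustration, assuming a hypothetical two-column `cache` table rather than the tool's real schema:

```python
import json
import sqlite3

def cached_fetch(con, pdl, fetch_fn, refresh=False):
    """Return cached data for this PDL, calling fetch_fn (the API)
    only on a cache miss or when a refresh is forced."""
    cur = con.cursor()
    cur.execute("SELECT data FROM cache WHERE pdl = ?", [pdl])
    row = cur.fetchone()
    if row is not None and not refresh:
        return json.loads(row[0])   # cache hit: no API call
    data = fetch_fn()               # cache miss or forced refresh
    cur.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)",
                [pdl, json.dumps(data)])
    con.commit()
    return data
```

Wiping the cache (the `WIPE_CACHE` behaviour) then simply amounts to deleting the rows, after which every fetch falls back to the API.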

## Consumption BASE vs HP/HC

Even if you are on a BASE plan (and not HP/HC), it is worth entering the prices of each plan.
The tool will do the calculations for you and tell you which plan is the most advantageous based on your consumption.
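A minimal sketch of that comparison; the function and the prices are illustrative, not the tool's actual code or current tariffs:

```python
def cheapest_plan(kwh_hp, kwh_hc, price_base, price_hp, price_hc):
    """Compare the total cost of a BASE plan vs an HP/HC plan
    for the same consumption split (illustrative prices only)."""
    cost_base = (kwh_hp + kwh_hc) * price_base
    cost_hphc = kwh_hp * price_hp + kwh_hc * price_hc
    return ("BASE", cost_base) if cost_base <= cost_hphc else ("HP/HC", cost_hphc)

# Example with made-up prices: 1200 kWh in peak hours, 800 kWh off-peak.
print(cheapest_plan(1200, 800, 0.1740, 0.1841, 0.1470))
```

With this split, enough off-peak consumption tips the balance toward HP/HC even though its peak price is higher than the BASE price.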

### Blacklist

Sometimes there are holes in the Enedis consumption records, so I set up a blacklist system for certain dates.

If a date still returns no information after 7 tries (7 × CYCLE), I blacklist it and no longer generate an API call for it.
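The blacklist rule can be sketched as a per-date failure counter (hypothetical helper, not the tool's actual implementation):

```python
MAX_TRIES = 7  # one try per CYCLE, so 7 x CYCLE before giving up

def should_retry(failures, date, got_data):
    """Track empty responses per date; once a date has failed
    MAX_TRIES times it is blacklisted (no more API calls for it).
    Returns True while the date is still worth querying."""
    if got_data:
        failures.pop(date, None)  # a success clears the counter
        return True
    failures[date] = failures.get(date, 0) + 1
    return failures[date] < MAX_TRIES
```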

## Usage:

@@ -107,8 +139,15 @@ GET_CONSUMPTION="True"
GET_PRODUCTION="False"
HA_AUTODISCOVERY="False"
HA_AUTODISCOVERY_PREFIX='homeassistant'
CYCLE=86400
BASE_PRICE=0
CYCLE=3600
OFFPEAK_HOURS=""
CONSUMPTION_PRICE_BASE=0
CONSUMPTION_PRICE_HC=0
CONSUMPTION_PRICE_HP=0
REFRESH_CONTRACT="False"
REFRESH_ADDRESSES="False"
WIPE_CACHE="False"
DEBUG="False"

docker run -it --restart=unless-stopped \
-e ACCESS_TOKEN="$ACCESS_TOKEN" \
@@ -125,8 +164,15 @@ docker run -it --restart=unless-stopped \
-e GET_PRODUCTION="$GET_PRODUCTION" \
-e HA_AUTODISCOVERY="$HA_AUTODISCOVERY" \
-e HA_AUTODISCOVERY_PREFIX="$HA_AUTODISCOVERY_PREFIX" \
-e CYCLE="$CYCLE" \
-e BASE_PRICE="$BASE_PRICE" \
-e CYCLE="$CYCLE" \
-e OFFPEAK_HOURS="$OFFPEAK_HOURS" \
-e CONSUMPTION_PRICE_BASE="$CONSUMPTION_PRICE_BASE" \
-e CONSUMPTION_PRICE_HC="$CONSUMPTION_PRICE_HC" \
-e CONSUMPTION_PRICE_HP="$CONSUMPTION_PRICE_HP" \
-e REFRESH_CONTRACT="$REFRESH_CONTRACT" \
-e REFRESH_ADDRESSES="$REFRESH_ADDRESSES" \
-e WIPE_CACHE="$WIPE_CACHE" \
-e DEBUG="$DEBUG" \
-v $(pwd):/data
m4dm4rtig4n/enedisgateway2mqtt:latest
```
@@ -156,20 +202,36 @@ services:
HA_AUTODISCOVERY: "False"
HA_AUTODISCOVERY_PREFIX: 'homeassistant'
CYCLE: 86400
BASE_PRICE: 0.1445
OFFPEAK_HOURS: ""
CONSUMPTION_PRICE_BASE: 0
CONSUMPTION_PRICE_HC: 0
CONSUMPTION_PRICE_HP: 0
REFRESH_CONTRACT: "False"
REFRESH_ADDRESSES: "False"
WIPE_CACHE: "False"
DEBUG: "False"
volumes:
mydata:
```

## Roadmap

- Add **DJU18**
- Add HC/HP
- Create Home Assistant OS Addons
- Add Postgres/MariaDB connector*
- Add Postgres/MariaDB connector

## Change log:

### [0.5.0] - 2021-10-13

- Add HC/HP
- Rework database structure (all cached data are reset)
- Add new params to reset all cache.

### [0.4.1] - 2021-10-06

- Cache addresses & contracts data.

### [0.4.0] - 2021-10-05

- Switch locale to fr_FR.UTF8 (French date format)
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
0.4.0
0.5.0
77 changes: 55 additions & 22 deletions app/addresses.py
@@ -8,7 +8,16 @@
main = import_module("main")
f = import_module("function")

def getAddresses(client, cur):
def getAddresses(client, con, cur):

def queryApi(url, headers, data, count=0):
addresses = f.apiRequest(cur, con, type="POST", url=f"{url}", headers=headers, data=json.dumps(data))
if not "error_code" in addresses:
query = "INSERT OR REPLACE INTO addresses VALUES (?, ?, ?)"
cur.execute(query, [pdl, json.dumps(addresses), count])
con.commit()
return addresses

pdl = main.pdl
url = main.url
headers = main.headers
Expand All @@ -18,33 +27,57 @@ def getAddresses(client, cur):
"usage_point_id": str(pdl),
}

ha_discovery = {
pdl: {}
}

query = "SELECT * FROM addresses WHERE pdl = ?"
cur.execute(query, [pdl])
query_result = cur.fetchone()
if query_result is None:
addresses = requests.request("POST", url=f"{url}", headers=headers, data=json.dumps(data)).json()
addresses_b64 = str(addresses)
addresses_b64 = addresses_b64.encode('ascii')
addresses_b64 = base64.b64encode(addresses_b64)
cur.execute(f"INSERT OR REPLACE INTO addresses VALUES ('{pdl}','{addresses_b64}')")
f.log(" => Query API")
addresses = queryApi(url, headers, data)
else:
addresses = json.loads(query_result[1])
if main.refresh_addresses:
f.log(" => Query API (Refresh Cache)")
addresses = queryApi(url, headers, data, 0)
else:
f.log(f" => Query Cache")
addresses = json.loads(query_result[1])
query = "INSERT OR REPLACE INTO addresses VALUES (?, ?, ?)"
cur.execute(query, [pdl, json.dumps(addresses), 0])
con.commit()


if not "customer" in addresses:
f.publish(client, f"{pdl}/consumption/current_year/error", str(1))
if 'error_code' in addresses:
f.log(addresses['description'])
ha_discovery = {
"error_code": True,
"detail": {
"message": addresses['description']
}
}
f.publish(client, f"{pdl}/addresses/error", str(1))
for key, value in addresses.items():
f.publish(client, f"{pdl}/consumption/current_year/errorMsg/{key}", str(value))
f.publish(client, f"{pdl}/addresses/errorMsg/{key}", str(value))
else:
customer = addresses["customer"]
f.publish(client, f"{pdl}/customer_id", str(customer["customer_id"]))
for usage_points in customer['usage_points']:
for usage_point_key, usage_point_data in usage_points['usage_point'].items():
if isinstance(usage_point_data, dict):
for usage_point_data_key, usage_point_data_data in usage_point_data.items():
f.publish(client, f"{pdl}/addresses/{usage_point_key}/{usage_point_data_key}",
str(usage_point_data_data))
else:
f.publish(client, f"{pdl}/addresses/{usage_point_key}", str(usage_point_data))
if "customer" in addresses:
customer = addresses["customer"]
f.publish(client, f"{pdl}/customer_id", str(customer["customer_id"]))
for usage_points in customer['usage_points']:
for usage_point_key, usage_point_data in usage_points['usage_point'].items():
if isinstance(usage_point_data, dict):
for usage_point_data_key, usage_point_data_data in usage_point_data.items():
f.publish(client, f"{pdl}/addresses/{usage_point_key}/{usage_point_data_key}",
str(usage_point_data_data))
else:
f.publish(client, f"{pdl}/addresses/{usage_point_key}", str(usage_point_data))
else:
ha_discovery = {
"error_code": True,
"detail": {
"message": addresses
}
}

return ha_discovery