Merge pull request #16 from m4dm4rtig4n/0.5.0
0.5.0
m4dm4rtig4n authored Oct 15, 2021
2 parents 1a4b698 + 80db8a4 commit 038927e
Showing 9 changed files with 867 additions and 234 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build_push_docker.yml
@@ -69,7 +69,7 @@ jobs:
m4dm4rtig4n/enedisgateway2mqtt:${{ steps.vars.outputs.version }}
- name: Discord notification
if: steps.check-tag.outputs.dev == 'false'
# if: steps.check-tag.outputs.dev == 'false'
env:
DISCORD_WEBHOOK: ${{ secrets.DISCORD_WEBHOOK }}
# DISCORD_EMBEDS: {color: 3447003, author: { name: client.user.username, icon_url: client.user.displayAvatarURL() }, title: "EnedisGateway2MQTT new version available => ${{ steps.vars.outputs.version }}", url: "https://hub.docker.com/r/m4dm4rtig4n/enedisgateway2mqtt", fields: [{ name: "Github", value: "https://github.com/m4dm4rtig4n/enedisgateway2mqtt"}, {name: "Docker.io", value: "https://hub.docker.com/r/m4dm4rtig4n/enedisgateway2mqtt"}], timestamp: new Date(), footer: {icon_url: client.user.displayAvatarURL(), text: "© m4dm4rtig4n"}}
86 changes: 62 additions & 24 deletions README.md
@@ -39,6 +39,12 @@ and curl test command.
The easiest way is to use Firefox in the consent process**


## EnedisGateway2MQTT limit

To avoid saturating the Enedis Gateway service, the number of API calls is limited to 15 per day.
Most of the information will be collected during the first launch.
After that, it will take a few more days (about one week) to import all "detailed" consumption data covering the last 2 years.
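
As a rough illustration of the idea, a daily budget can be enforced with a counter that resets when the date changes. This is only a sketch, not the project's actual implementation; the names (`MAX_CALLS_PER_DAY`, `ApiQuota`) are hypothetical:

```python
from datetime import date

MAX_CALLS_PER_DAY = 15  # hypothetical constant mirroring the documented limit


class ApiQuota:
    """Count API calls and refuse new ones once the daily budget is spent."""

    def __init__(self, limit=MAX_CALLS_PER_DAY):
        self.limit = limit
        self.day = date.today()
        self.used = 0

    def try_acquire(self):
        # Reset the counter when the day rolls over.
        if date.today() != self.day:
            self.day = date.today()
            self.used = 0
        if self.used >= self.limit:
            return False  # budget exhausted, retry tomorrow
        self.used += 1
        return True
```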

## Enedis Gateway limit

Enedis Gateway limits usage to 50 API calls per day / per PDL.
@@ -50,8 +56,11 @@ If you reach this limit, you will be banned for 24 hours!
| Parameters | Number of calls |
|:---------------|:---------------:|
| GET_CONSUMPTION | 3 |
| GET_CONSUMPTION_DETAIL | 105 |
| GET_PRODUCTION | 3 |
| GET_PRODUCTION_DETAIL | 105 |
| ADDRESSES | 1 |
| CONTRACT | 1 |

See the [persistance](#persistance) chapter to reduce the number of API calls.

@@ -71,39 +80,42 @@ See chapter [persistance](#persistance), to reduce API call number.
| RETAIN | Retain data in MQTT | False |
| QOS | Quality Of Service MQTT | 0 |
| GET_CONSUMPTION | Enable API call to get your consumption | True |
| GET_CONSUMPTION_DETAIL | Enable API call to get your consumption in detail mode | True |
| GET_PRODUCTION | Enable API call to get your production | False |
| GET_PRODUCTION_DETAIL | Enable API call to get your production in detail mode | False |
| HA_AUTODISCOVERY | Enable auto-discovery | False |
| HA_AUTODISCOVERY_PREFIX | Home Assistant auto discovery prefix | homeassistant |
| BASE_PRICE | Price of kWh in base plan | 0 |
| CYCLE | Data refresh cycle (12h minimum) | 43200 |
| OFFPEAK_HOURS | Force HP/HC format: "HHhMM-HHhMM;HHhMM-HHhMM;..." (see the parsing sketch after this table) | "" |
| CONSUMPTION_PRICE_BASE | Price of kWh in base plan | 0 |
| CONSUMPTION_PRICE_HC | Price of HC kWh | 0 |
| CONSUMPTION_PRICE_HP | Price of HP kWh | 0 |
| CYCLE | Data refresh cycle (1h minimum) | 3600 |
| ADDRESSES | Get all addresses information | False |
| FORCE_REFRESH | Force refresh all data (wipe all cached data) | False |

*Why is there no calculation for the HC / HP ?*

The HC / HP calculations require a lot of API calls and the limit will be reached very quickly.
> This feature will be added soon.
| REFRESH_CONTRACT | Refresh contract data | False |
| REFRESH_ADDRESSES | Refresh addresses data | False |
| WIPE_CACHE | Force refresh all data (wipe all cached data) | False |
| DEBUG | Display debug information | False |
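
For illustration, here is a minimal sketch of how an `OFFPEAK_HOURS` value in the documented "HHhMM-HHhMM;..." format could be parsed. This is not the tool's actual parser, and the function names are hypothetical:

```python
def _to_minutes(hhmm):
    """Convert 'HHhMM' to minutes since midnight."""
    hours, minutes = hhmm.split("h")
    return int(hours) * 60 + int(minutes)


def parse_offpeak_hours(value):
    """Parse 'HHhMM-HHhMM;HHhMM-HHhMM;...' into (start, end) minute offsets."""
    ranges = []
    for chunk in value.split(";"):
        if not chunk:
            continue
        start, end = chunk.split("-")
        ranges.append((_to_minutes(start), _to_minutes(end)))
    return ranges


# Off-peak from 22h00 to 06h00 (wraps past midnight) plus 12h00-14h00:
print(parse_offpeak_hours("22h00-06h00;12h00-14h00"))
# -> [(1320, 360), (720, 840)]
```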

## Cache

Since v0.3, EnedisGateway2MQTT uses an SQLite database to store all data and reduce the number of API calls.
Don't forget to mount /data to keep database persistence!!
> **Don't forget to mount /data to keep database persistence!!**
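
If you want to peek inside the cache, here is a minimal sketch using Python's built-in sqlite3 module. The database filename below is a guess (check your mounted /data volume for the real one); the positional row layout matches the `addresses` table used in app/addresses.py (pdl, JSON payload, count):

```python
import json
import sqlite3

# Hypothetical path; check your mounted /data volume for the actual filename.
con = sqlite3.connect("/data/enedis_gateway.db")
cur = con.cursor()

# Same positional layout as app/addresses.py: (pdl, json payload, count).
cur.execute("SELECT * FROM addresses")
for row in cur.fetchall():
    pdl, payload = row[0], json.loads(row[1])
    print(pdl, list(payload.keys()))

con.close()
```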
### Lifecycle
If you change your contract or plan, it is necessary to set "**REFRESH_CONTRACT**" to "**True**".

| Data type | Information | Refresh after |
|:---------------:|:---------------|:-----:|
| contracts | All contract information | 7 runs |
| addresses | All contact details | 7 runs |
| consumption | Daily consumption | never |
| production | Daily production | never |
If you move, it is necessary to set "**REFRESH_ADDRESSES**" to "**True**".

If you want to force refresh all data, you can set the environment variable "**FORCE_REFRESH**" to "**True**".
If you want to force refresh all data, you can set the environment variable "**WIPE_CACHE**" to "**True**".

**WARNING: this parameter wipes all data (addresses, contracts, consumption, production) and generates a lot of API calls (don't forget the [Enedis Gateway limit](#enedis-gateway-limit))**

> Don't forget that it takes several days to recover consumption/production in detail mode.
## Consumption BASE vs HP/HC

Even if you are on a BASE plan (and not HP/HC), it is worth entering the prices of each plan.
The tool will do the calculation for you and tell you which plan is the most advantageous based on your consumption.
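
As an illustration of that comparison, here is a minimal sketch that prices the same consumption under both plans; the function and the sample prices are hypothetical, not the tool's actual code:

```python
def compare_plans(kwh_hp, kwh_hc, price_base, price_hp, price_hc):
    """Return the cost of the same consumption under BASE and HP/HC plans."""
    base_cost = (kwh_hp + kwh_hc) * price_base
    hphc_cost = kwh_hp * price_hp + kwh_hc * price_hc
    best = "BASE" if base_cost <= hphc_cost else "HP/HC"
    return base_cost, hphc_cost, best


# 1200 kWh in peak hours, 800 kWh off-peak, with sample prices (EUR/kWh):
print(compare_plans(1200, 800, price_base=0.1558, price_hp=0.1821, price_hc=0.1360))
# -> (311.6, 327.32, 'BASE')
```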

### Blacklist

Sometimes there are holes in the Enedis consumption records, so I set up a blacklist system for certain dates.
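
The rest of this section is collapsed out of the diff; purely as a hypothetical sketch, such a blacklist can be as simple as a set of dates that the fetch loop skips:

```python
from datetime import date

# Hypothetical structure: dates with known holes in the Enedis records.
BLACKLISTED_DATES = {date(2021, 9, 14), date(2021, 10, 2)}


def should_fetch(day):
    """Skip days that are blacklisted because Enedis returns no data for them."""
    return day not in BLACKLISTED_DATES
```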
@@ -127,8 +139,15 @@ GET_CONSUMPTION="True"
GET_PRODUCTION="False"
HA_AUTODISCOVERY="False"
HA_AUTODISCOVERY_PREFIX='homeassistant'
CYCLE=86400
BASE_PRICE=0
CYCLE=3600
OFFPEAK_HOURS=""
CONSUMPTION_PRICE_BASE=0
CONSUMPTION_PRICE_HC=0
CONSUMPTION_PRICE_HP=0
REFRESH_CONTRACT="False"
REFRESH_ADDRESSES="False"
WIPE_CACHE="False"
DEBUG="False"
docker run -it --restart=unless-stopped \
-e ACCESS_TOKEN="$ACCESS_TOKEN" \
@@ -145,8 +164,15 @@ docker run -it --restart=unless-stopped \
-e GET_PRODUCTION="$GET_PRODUCTION" \
-e HA_AUTODISCOVERY="$HA_AUTODISCOVERY" \
-e HA_AUTODISCOVERY_PREFIX="$HA_AUTODISCOVERY_PREFIX" \
-e CYCLE="$CYCLE" \
-e BASE_PRICE="$BASE_PRICE" \
-e CYCLE="$CYCLE" \
-e OFFPEAK_HOURS="$OFFPEAK_HOURS" \
-e CONSUMPTION_PRICE_BASE="$CONSUMPTION_PRICE_BASE" \
-e CONSUMPTION_PRICE_HC="$CONSUMPTION_PRICE_HC" \
-e CONSUMPTION_PRICE_HP="$CONSUMPTION_PRICE_HP" \
-e REFRESH_CONTRACT="$REFRESH_CONTRACT" \
-e REFRESH_ADDRESSES="$REFRESH_ADDRESSES" \
-e WIPE_CACHE="$WIPE_CACHE" \
-e DEBUG="$DEBUG" \
-v $(pwd):/data \
m4dm4rtig4n/enedisgateway2mqtt:latest
```
@@ -176,20 +202,32 @@ services:
HA_AUTODISCOVERY: "False"
HA_AUTODISCOVERY_PREFIX: 'homeassistant'
CYCLE: 86400
BASE_PRICE: 0.1445
OFFPEAK_HOURS: ""
CONSUMPTION_PRICE_BASE: 0
CONSUMPTION_PRICE_HC: 0
CONSUMPTION_PRICE_HP: 0
REFRESH_CONTRACT: "False"
REFRESH_ADDRESSES: "False"
WIPE_CACHE: "False"
DEBUG: "False"
volumes:
mydata:
```

## Roadmap

- Add **DJU18**
- Add HC/HP
- Create Home Assistant OS Addons
- Add Postgres/MariaDB connector*
- Add Postgres/MariaDB connector

## Change log:

### [0.5.0] - 2021-10-13

- Add HC/HP
- Rework database structure (all cached data are reset)
- Add new params to reset all cached data.

### [0.4.1] - 2021-10-06

- Cache addresses & contracts data.
Expand Down
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
0.4.1
0.5.0
65 changes: 43 additions & 22 deletions app/addresses.py
@@ -11,10 +11,11 @@
def getAddresses(client, con, cur):

def queryApi(url, headers, data, count=0):
addresses = requests.request("POST", url=f"{url}", headers=headers, data=json.dumps(data)).json()
query = f"INSERT OR REPLACE INTO addresses VALUES (?,?,?)"
cur.execute(query, [pdl, json.dumps(addresses), count])
con.commit()
addresses = f.apiRequest(cur, con, type="POST", url=f"{url}", headers=headers, data=json.dumps(data))
if not "error_code" in addresses:
query = f"INSERT OR REPLACE INTO addresses VALUES (?,?,?)"
cur.execute(query, [pdl, json.dumps(addresses), count])
con.commit()
return addresses

pdl = main.pdl
@@ -26,37 +27,57 @@ def queryApi(url, headers, data, count=0):
"usage_point_id": str(pdl),
}

ha_discovery = {
pdl: {}
}

query = f"SELECT * FROM addresses WHERE pdl = '{pdl}'"
cur.execute(query)
query_result = cur.fetchone()
if query_result is None:
f.log(" => Query API")
addresses = queryApi(url, headers, data)
else:
count = query_result[2]
if count >= main.force_refresh_count:
if main.refresh_addresses == True:
f.log(" => Query API (Refresh Cache)")
addresses = queryApi(url, headers, data, 0)
else:
f.log(f" => Query Cache (refresh in {main.force_refresh_count-count} try)")
f.log(f" => Query Cache")
addresses = json.loads(query_result[1])
new_count = count + 1
query = f"INSERT OR REPLACE INTO addresses VALUES (?,?,?)"
cur.execute(query, [pdl, json.dumps(addresses), new_count])
cur.execute(query, [pdl, json.dumps(addresses), 0])
con.commit()

if not "customer" in addresses:
f.publish(client, f"{pdl}/consumption/current_year/error", str(1))

if 'error_code' in addresses:
f.log(addresses['description'])
ha_discovery = {
"error_code": True,
"detail": {
"message": addresses['description']
}
}
f.publish(client, f"{pdl}/addresses/error", str(1))
for key, value in addresses.items():
f.publish(client, f"{pdl}/consumption/current_year/errorMsg/{key}", str(value))
f.publish(client, f"{pdl}/addresses/errorMsg/{key}", str(value))
else:
customer = addresses["customer"]
f.publish(client, f"{pdl}/customer_id", str(customer["customer_id"]))
for usage_points in customer['usage_points']:
for usage_point_key, usage_point_data in usage_points['usage_point'].items():
if isinstance(usage_point_data, dict):
for usage_point_data_key, usage_point_data_data in usage_point_data.items():
f.publish(client, f"{pdl}/addresses/{usage_point_key}/{usage_point_data_key}",
str(usage_point_data_data))
else:
f.publish(client, f"{pdl}/addresses/{usage_point_key}", str(usage_point_data))
if "customer" in addresses:
customer = addresses["customer"]
f.publish(client, f"{pdl}/customer_id", str(customer["customer_id"]))
for usage_points in customer['usage_points']:
for usage_point_key, usage_point_data in usage_points['usage_point'].items():
if isinstance(usage_point_data, dict):
for usage_point_data_key, usage_point_data_data in usage_point_data.items():
f.publish(client, f"{pdl}/addresses/{usage_point_key}/{usage_point_data_key}",
str(usage_point_data_data))
else:
f.publish(client, f"{pdl}/addresses/{usage_point_key}", str(usage_point_data))
else:
ha_discovery = {
"error_code": True,
"detail": {
"message": addresses
}
}

return ha_discovery
