Fix for camera image captures #61
Conversation
Pushed some changes that update test_camera.py and remove the unused … Edit: The difference here is that the ir_camera.py JSON data doesn't have a …
You should probably modify and rename ir_camera.py so that it can create mock JSON for all the different types of cameras, or at least so that you can test the two known different capture URLs. Then you can modify the tests to verify that the code uses the capture URL in the JSON to send the request.
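For what it's worth, a parameterized mock module might look something like the sketch below; the module name, helper signature, and JSON fields here are assumptions for illustration, not AbodePy's actual mock layout:

```python
# tests/mock/devices/ip_cam.py (hypothetical): builds mock camera JSON,
# with an optional capture URL so tests can cover cameras that lack one.
DEVICE_ID = 'ZB:00000001'


def device(devid=DEVICE_ID, status='Online', capture_url=None):
    """Return mock IP camera JSON as a string."""
    # Only emit the capture_url key when one is supplied, so a test can
    # simulate a camera whose JSON has no capture URL at all.
    capture = '"capture_url": "' + capture_url + '", ' if capture_url else ''
    return ('{'
            '"id": "' + devid + '", '
            '"type_tag": "device_type.ipcam", '
            '"type": "IP Cam", '
            + capture +
            '"status": "' + status + '"'
            '}')
```

A test could then call device(capture_url='/api/v1/cams/' + DEVICE_ID + '/record') for one scenario and plain device() for another.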
After looking into this, I've identified some changes I made to camera.py. We have to check what type of camera it is to determine whether to use … (a rough sketch of the idea follows below).

That being said, I'm still stuck on updating the tests. I know what I want to do; I just can't seem to figure out how to get it to work. So far I've created a new ipcam.py file in test/devices. This is to return the JSON data for an IP camera device. I thought about the idea you proposed of renaming ir_camera.py and having it be able to return different JSON data for different cameras, but I'm not sure how to do that while working with tests. Now I'm not quite sure how I would test the JSON data in ipcam.py without creating a new version of test_camera.py. I'm assuming I'd want to run all the same test methods against the ipcam.py JSON data just to make sure the code works with different JSON data.

Anyway, I'll try to look at this some more later. I'm sure this is something super simple, but I'm a coding newbie (still). :)

Edit: Committed my changes to camera.py and reverted the changes I made to test_camera.py and constants.py.
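As a rough illustration of that check, not the committed code: self._json_state, CONST.BASE_URL, the $DEVID$ placeholder, and send_request() are assumptions about AbodePy's internals here.

```python
import abodepy.helpers.constants as CONST


def capture(self):
    """Request a new image capture from the camera."""
    # Streaming cameras carry their own capture URL in the device JSON;
    # otherwise fall back to the older ID-based capture endpoint.
    if 'capture_url' in self._json_state:
        url = CONST.BASE_URL + self._json_state['capture_url']
    else:
        url = CONST.BASE_URL + CONST.CAMS_ID_CAPTURE_URL.replace(
            '$DEVID$', self.device_id)

    response = self._abode.send_request("post", url)
    return response.status_code == 200
```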
@MisterWil, I'm a bit confused as to how … If I'm understanding the purpose of … Edit: Actually, I think they are being used when methods like …
When this line is called in the test, it executes a login flow, since I explicitly called logout previously. The code will go through and actually issue requests; however, since I've used requests_mock to say "if this URL is called, just return this mock response," it won't ever actually reach out to Abode. So requests_mock will catch the given request URLs from the real code and return exactly what you tell it to return. This allows you to test your code against multiple possible return values and error states.

It is also possible that some of the tests I have written include requests_mock URLs that never get called for the test's intended purpose. This is simply because I got lazy, copy/pasted blocks of the mock setup, and modified what I needed to test the intended purpose.

So when you write a test, start out by thinking "What am I trying to test?" For example:

- How do I test that the correct URL is called if it's Camera A?
- How do I test it if it's Camera B?
- What if Abode adds a new camera I don't recognize? Can I test that?
- What if the capture URL returns a 404?
- What if the capture URL doesn't exist in my camera JSON?

All of those should be separate test methods: you write mock JSON that simulates those different states, and the tests verify that the code does what you want it to do. If a test fails, you modify the code to work with your test, and then you re-run ALL tests to verify that you fixed your broken test and that all your previous tests still pass. You're literally just thinking of any possible case that might occur and writing tests to simulate it, to verify the code is robust enough to handle it.

Eventually you might start thinking about code coverage. When you run the AbodePy tests, the output lists which files have lines that are missing coverage. Coveralls commented above that … The place you're "intended" to aim for is 100% coverage, but that is usually difficult and becomes even more difficult as the code base increases in size. Most of the time, high 80s or 90s is good enough; and if real-world testing or use turns up a new bug, you can just write a test that recreates it so that the test fails, fix it in your code, re-run all the tests to see that you fixed the bug without breaking another test, and call it a day.
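To make that concrete, here's a self-contained sketch of the pattern using requests_mock against plain requests calls; the capture URL below is made up for illustration, and a real test would exercise the camera's capture method instead:

```python
import unittest

import requests
import requests_mock

# Hypothetical capture endpoint, for illustration only.
CAPTURE_URL = 'https://my.goabode.com/api/v1/cams/ZB:00000001/capture'


class TestCaptureRequests(unittest.TestCase):
    """One scenario per test method, each with its own mock response."""

    @requests_mock.Mocker()
    def test_capture_ok(self, m):
        # "If this URL is called, just return this mock response."
        m.post(CAPTURE_URL, text='', status_code=200)
        response = requests.post(CAPTURE_URL)
        self.assertEqual(response.status_code, 200)

    @requests_mock.Mocker()
    def test_capture_not_found(self, m):
        # Simulate the capture endpoint erroring with a 404.
        m.post(CAPTURE_URL, text='Not Found', status_code=404)
        response = requests.post(CAPTURE_URL)
        self.assertEqual(response.status_code, 404)


if __name__ == '__main__':
    unittest.main()
```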
Thanks for the info. Just committed my first hack at writing tests for specifically calling the …
A few thoughts on this implementation: …
I'm debating if this is better or not, but I currently modified the …
Added to errors.py:
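The exact snippet isn't reproduced here, but in the (code, message) tuple style that errors.py uses elsewhere, it might look something like this; the constant name, code number, and wording are all illustrative:

```python
# Hypothetical entry for abodepy/helpers/errors.py; the name, number,
# and message are illustrative, not the actual committed values.
MISSING_CAM_TYPE = (
    13, "Camera device JSON contains no type information "
        "to determine which capture URL to use.")
```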
My thought here is that being more explicit is probably better in the event Abode ever changes the …
Any thoughts/inputs? I haven't committed these changes yet. Edit: I'm also working on rewriting …
Just committed some of the changes mentioned above, along with a lot of changes to test_camera.py. It's now testing both cameras' JSON data against all these methods; one way to structure that kind of shared test is sketched below. Some things I'd still like to do if I can figure them out: …
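A minimal, self-contained sketch of that shared-test idea using unittest's subTest; the two JSON strings below are stand-ins for what ir_camera.py and ipcam.py would generate:

```python
import json
import unittest

# Stand-in mock JSON for the two camera types (illustrative only).
IR_CAMERA_JSON = '{"id": "ZB:00000001", "type_tag": "device_type.ir_camera"}'
IP_CAM_JSON = ('{"id": "ZB:00000002", "type_tag": "device_type.ipcam", '
               '"capture_url": "/api/v1/cams/ZB:00000002/record"}')


class TestBothCameras(unittest.TestCase):
    """Run the same assertions against each camera's mock JSON."""

    def test_has_device_id(self):
        # subTest runs every variant and reports failures per camera
        # instead of stopping at the first one that breaks.
        for name, raw in (('ir_camera', IR_CAMERA_JSON),
                          ('ipcam', IP_CAM_JSON)):
            with self.subTest(camera=name):
                data = json.loads(raw)
                self.assertIn('id', data)


if __name__ == '__main__':
    unittest.main()
```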
This is looking good. When you feel like you're done, let me know and I'll merge it and push out a new version. :-)
I think I'm done for now. I've hit a dead end trying to figure out how to use requests_mock in the …
Just for your learning, I fixed a couple of linting issues with this commit that popped up when I ran …
Ah, sorry, not sure why my linter didn't catch those in Visual Studio Code. I'll make sure to run …
This fixes the capture method to work with both the Abode Iota and the standalone streaming cameras (#60). There are two things that still need to be done:

- Update the tests.
- Remove CAMS_ID_CAPTURE_URL from constants.py once the tests are updated, since it will no longer be used.

I'm not very good at writing tests, so I'll likely need help on this. At first glance, I think we need to create a camera.py file in tests/mock/devices that returns mock JSON data for an Abode camera. From there, I think test_camera.py will need to be updated, specifically line 105, to make it match the changes in this pull request.