<?xml version="1.0" encoding="utf-8"?>
<search>
<entry>
<title>三维地图</title>
<url>/2021/04/14/3d-map/</url>
<content><![CDATA[<h1 id="三维地图建模调研报告"><a href="#三维地图建模调研报告" class="headerlink" title="三维地图建模调研报告"></a>三维地图建模调研报告</h1><blockquote>
<p>三维电子地图,或3D电子地图,就是以<a href="https://baike.baidu.com/item/三维" target="_blank" rel="noopener">三维</a>电子地图数据库为基础,按照一定<a href="https://baike.baidu.com/item/比例/5804241" target="_blank" rel="noopener">比例</a>对<a href="https://baike.baidu.com/item/现实世界/688877" target="_blank" rel="noopener">现实世界</a>或其中一部分的一个或多个方面的三维、<a href="https://baike.baidu.com/item/抽象/9021828" target="_blank" rel="noopener">抽象</a>的描述。<a href="https://baike.baidu.com/item/网络三维/11066401" target="_blank" rel="noopener">网络三维</a>电子地图不仅通过直观的地理实景模拟表现方式,为用户提供地图查询、出行导航等地图检索功能,同时集成生活资讯、<a href="https://baike.baidu.com/item/电子政务/1268" target="_blank" rel="noopener">电子政务</a>、电子商务、虚拟社区、出行导航等一系列服务。网络三维电子地图在给人们带来方便的同时,也给国家安全、社会稳定和人们隐私等带来威胁。</p>
</blockquote>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210414110643.jpg" alt="preview"></p>
<h1 id="0x00三维地图的作用"><a href="#0x00三维地图的作用" class="headerlink" title="0x00三维地图的作用"></a>0x00三维地图的作用</h1><p>三维地图作为一种记录环境信息的载体,还能提供地图查询、车辆导航等地图检索功能,具有实时、直观、可视化的优点。拿我们目前的项目来说,园区智能物流车、管道无人机巡检系统、园区巡检无人车都能利用上三维地图的这些优点。在与东阳光乳源电化厂单总的交谈中得出了利用各种各样的无人车、无人机、智能设备完成电化厂的日常巡检、安防、排障、运输等功能的结论,除此之外单总还描绘了一个完全远程操控、实时操控、无人监管的宏伟蓝图。当然后者应该是一个长期的目标,我们要解决的是怎么完成前者的任务:利用大疆RTK行业无人机构建三维地图。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210415133829.gif;charset=UTF-8" alt="智慧社区"></p>
<h1 id="0x01技术原理"><a href="#0x01技术原理" class="headerlink" title="0x01技术原理"></a>0x01技术原理</h1><p>地图是根据一定的数学法则,将自然地理的自然现象和<a href="https://baike.baidu.com/item/社会现象" target="_blank" rel="noopener">社会现象</a>通过概括和取舍用符号缩绘在平面上的图形。<a href="https://baike.baidu.com/item/电子地图" target="_blank" rel="noopener">电子地图</a>则是以<a href="https://baike.baidu.com/item/地图数据库" target="_blank" rel="noopener">地图数据库</a>为基础,在适当尺寸的屏幕上按照一定比例显示的地图。而三维电子地图就是以三维电子地图数据库为基础,按照一定比例对现实世界或其中一部分的一个或多个方面的<a href="https://baike.baidu.com/item/三维" target="_blank" rel="noopener">三维</a>、抽象的描述(或综合),其形象性、功能性远强于二维电子地图。结合发展迅速的<a href="https://baike.baidu.com/item/网络通信技术" target="_blank" rel="noopener">网络通信技术</a>和丰富的计算机网络资源,三维电子地图和<a href="https://baike.baidu.com/item/通信网络技术" target="_blank" rel="noopener">通信网络技术</a>相结合,就形成了简单易用的<a href="https://baike.baidu.com/item/网络三维" target="_blank" rel="noopener">网络三维</a>电子地图。网络三维电子地图通常运用网络拓扑技术、<a href="https://baike.baidu.com/item/数据库管理系统" target="_blank" rel="noopener">数据库管理系统</a>对物体实体的坐标进行数学建模,并且基于<a href="https://baike.baidu.com/item/GIS系统" target="_blank" rel="noopener">GIS系统</a>处理、WEB技术、<a href="https://baike.baidu.com/item/计算机图形学" target="_blank" rel="noopener">计算机图形学</a>、三维仿真技术和<a href="https://baike.baidu.com/item/虚拟现实技术" target="_blank" rel="noopener">虚拟现实技术</a>所实现。</p>
<p>必备以下五大模块知识:</p>
<ol>
<li>如何配置航测设备,如何判断设备优劣?</li>
<li>如何操作设备?</li>
<li>如何执行一项完整的航飞项目?</li>
<li>如何处理飞行数据,进行倾斜建模?</li>
<li>如何对模型进行精细化修饰?</li>
</ol>
<p>还需要学会以下六大软件操作:</p>
<ol>
<li>DJI + Altizure</li>
<li>Pix4D Mapper + Inpho</li>
<li>Context Capture + PhotoScan</li>
<li>Mirauge 3D</li>
<li>Terrasolid</li>
<li>ZR-modeler</li>
</ol>
<h1 id="三维实景地图模型"><a href="#三维实景地图模型" class="headerlink" title="三维实景地图模型"></a>三维实景地图模型</h1><blockquote>
<p>三维实景其实我们每天都在接触,那就是我们所见所得。三维实景英文称为3D IVR,它是一种运用数码相机对现有场景进行多角度环视拍摄然后进行后期缝合并加载播放程序来完成的一种三维虚拟展示技术。三维实景在浏览中可以由观赏者对图像进行放大、缩小、移动、多角度观看等操作。经过深入的编程,可实现场景中的热点链接、多场景之间虚拟漫游、雷达方位导航等功能。三维实景技术广泛应用于诸多领域网络虚拟展示。</p>
</blockquote>
<h3 id="三维实景技术有哪些特点?"><a href="#三维实景技术有哪些特点?" class="headerlink" title="三维实景技术有哪些特点?"></a>三维实景技术有哪些特点?</h3><ol>
<li>通过专业相机把现场场景完整、细致地拍摄记录下来,不留死角。再通过播放器将图片中的一切景致,多角度、全方位展示给访问者,一览无遗。</li>
<li>三维实景图像源自对真实场景的摄影捕捉,虽然通过实景制作出虚拟空间,但此虚拟空间完全源自于真实的场景,有别于电脑绘制出的虚拟空间,给访问者更加真实的视觉享受。</li>
<li>360度环视播放效果,让访问者置身于三维立体空间里,任意穿行、观赏,身临其境,享受虚拟世界带来的奇妙幻境。</li>
</ol>
<h3 id="空间三维实景(全景展示)优势:"><a href="#空间三维实景(全景展示)优势:" class="headerlink" title="空间三维实景(全景展示)优势:"></a>空间三维实景(全景展示)优势:</h3><ol>
<li>播放终端没有特别要求,一般大众化电脑均能播放。</li>
<li>无需下载播放插件,省去访问者下载的麻烦,播放浏览无障碍。</li>
<li>网络推广没有特殊技术要求,方便加载推广。</li>
</ol>
<h3 id="应用方面:"><a href="#应用方面:" class="headerlink" title="应用方面:"></a>应用方面:</h3><ol>
<li><p>通过官方网站参观企业的厂区、厂房、大型设备、全方位展示企业基础实力,在浏览中提升企业形象</p>
</li>
<li><p>将三维实景App安装到iPad、iPhone等移动设备上。通过iPad或iPhone等便携设备在展会或会谈时展示介绍企业重点厂房设备,使用先进的科技产品,在创新中增加企业美誉度。</p>
</li>
<li><p>将三维实景标注到Google地图,百度地图上。 在地图上搜索企业位置,直接点击进入三维实景展示,方便查找企业方位,在细节中完善企业行业品牌.</p>
</li>
<li><p>构建三维实景地图配合信息化图标进行远程巡检、内容查看、生产效率调节等</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210419162546.jpg" alt="中维空间--智慧校园可视化管理"></p>
</li>
</ol>
<h1 id="无人机倾斜摄影"><a href="#无人机倾斜摄影" class="headerlink" title="无人机倾斜摄影"></a>无人机倾斜摄影</h1><p>(<a href="https://blog.csdn.net/modeling3D/article/details/115229274?utm_medium=distribute.pc_relevant.none-task-blog-baidujs_utm_term-0&spm=1001.2101.3001.4242" target="_blank" rel="noopener">csdn</a>)</p>
<blockquote>
<p>倾斜航空摄影测量技术是测绘领域近年来发展迅猛的一项高新技术,它通过多角度的拍摄得到同一地物不同角度的倾斜影像,从而获取传统航空摄影测量不能获取的建筑物侧面纹理,目前在数字城市、智慧城市等的建设中应用广泛。在文物保护修复领域中,实景三维模型是最能真实反映文化遗产现状的表示方式,是文化遗产保护的重要数据基础。但由于文化遗产具有结构复杂等特殊性,要求获取构建的三维模型成果的分辨率、清晰度和材质颜色指标更高,增加了倾斜摄影数据获取难度。中维空间结合吴哥古迹保护项目,探索了如何使用无人机倾斜摄影测量技术实现高精细文化遗产实景三维模型的快速重构,为文物的保护、修复和研究提供基础数据的支撑。</p>
</blockquote>
<p>无人机倾斜摄影的具体应用有很多,如三维实景建模、三维城市地图、土石量算、矿区勘察、村镇地籍量算、管道铁道、山林勘察,通过三维建模之后,可以实现标绘、测量、分析、模型优化、内容演示、三维全景展示、可视化管理平台等。今天的重点是倾斜摄影在测量方面的应用和原理。</p>
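<p>以上面提到的"土石方量算"为例,下面用一段很短的代码示意这类量算在数据层面的大致原理:基于重建得到的 DSM(数字表面模型)栅格,把每个格网相对基准面的高差乘以格网面积再累加。代码中的格网分辨率、基准面高程均为假设值,仅用于说明思路,并不代表任何具体软件的实际实现。</p>
<figure class="highlight python"><table><tr><td class="code"><pre>import numpy as np

# 假设:从重建成果导出的一小块 DSM 高程栅格(单位:米),这里用随机数代替真实数据
dsm = np.random.uniform(98.0, 103.0, size=(200, 200))

cell_size = 0.1      # 假设格网分辨率为 0.1 m
base_height = 100.0  # 假设的基准面高程

diff = dsm - base_height                               # 各格网相对基准面的高差
cut  = np.clip(diff, 0, None).sum() * cell_size ** 2   # 高于基准面的部分:挖方体积(立方米)
fill = np.clip(-diff, 0, None).sum() * cell_size ** 2  # 低于基准面的部分:填方体积(立方米)
print("挖方约 %.1f m3,填方约 %.1f m3" % (cut, fill))
</pre></td></tr></table></figure>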
<h3 id="1-无人机倾斜航空摄影"><a href="#1-无人机倾斜航空摄影" class="headerlink" title="1.无人机倾斜航空摄影"></a>1.<strong>无人机倾斜航空摄影</strong></h3><p>倾斜航空摄影测量是国际测绘领域近年来倡导使用的一项高新技术,可同时从一个正射、四个倾斜等 5 个不同的角度采集影像数据(如图 1 所示),不但可获取正面影像信息,还可同时获取地物的多侧面影像信息,凭借其工期短、成本低和效率高等优势,在数字城市建设、应急指挥、国土安全、城市管理中得到了广泛的应用。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210419163440.jpg" alt="img"></p>
<p>相对于传统垂直下视航空摄影影像,倾斜航空拍摄的影像由于拍摄时的角度是倾斜的,所以影像具有变化的比例,倾斜拍摄影像对应的地面区域形状像一个梯形,在区域梯形前端拍摄的影像像素比梯形后端的像素高。如图 2 所示,T 为区域内目标点;T’为区域目标点在影像上对应的点;O 为摄影中心;h 为飞行高度;c 为相机参数;t 为倾斜影像倾角;α 为倾斜影像倾角半角;β 为区域目标点 T 与摄影中心的连线与竖直方向的夹角;PP 为倾斜摄影相机主光轴与地面的交点;PP’为倾斜影像的像主点。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210419163514.jpg" alt="img"></p>
<p>因倾斜摄影存在高地物遮挡低地物的缺陷,传统大飞机航空摄影倾斜航空摄影飞行相对高度多为 300 m 以 上,影像地面分辨率为 5—20 cm,无法满足文化遗产高精度、精细化的要求。为获取优于 5 cm 地面分辨率的全方位无漏洞的文化遗产影像数据,需要降低飞行高度,增大航向和旁向重叠度,采用无人机进行低空飞行拍摄。</p>
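<p>上面提到的"地面分辨率"(GSD)与航高的关系可以用一个简单的公式粗估:GSD ≈ 航高 × 像元尺寸 ÷ 焦距。下面的小例子中相机像元尺寸和焦距均为假设的示意参数(不特指某一型号),仅用于说明为什么需要降低航高才能获得优于 5 cm 的地面分辨率。</p>
<figure class="highlight python"><table><tr><td class="code"><pre>def gsd_cm(flight_height_m, pixel_size_um, focal_length_mm):
    """按 GSD = 航高 * 像元尺寸 / 焦距 估算地面分辨率,返回厘米"""
    return flight_height_m * (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3) * 100.0

# 假设某相机像元约 2.4 um、焦距约 8.8 mm(示意参数)
for h in (300, 150, 80):
    print("航高 %d m -> GSD 约 %.1f cm" % (h, gsd_cm(h, 2.4, 8.8)))
</pre></td></tr></table></figure>
<p>按上面的假设参数,航高 300 m 时 GSD 约 8 cm,降到 80 m 左右才能进入 2~3 cm 的量级,这也解释了正文中"降低飞行高度、增大重叠度"的做法。</p>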
<p>和传统航空摄影的大飞机相比,无人机具有方便、快捷、成本低廉的独特优势。一般来说,无人机航摄通过无线设备来控制和操作不载人的飞行器,可通过航高的调节实现高空间大范围影像获取和低空间小区域精确航拍,还能够针对文化遗产数据获取的需求,对具体建筑物进行 360°的立体环绕飞行,获取目标地物正射、倾斜航空影像,极大地增强了数据获取的科学性和时效性。</p>
<h3 id="2-文物建筑三维建模(中维空间案例)"><a href="#2-文物建筑三维建模(中维空间案例)" class="headerlink" title="2.文物建筑三维建模(中维空间案例)"></a>2.文物建筑三维建模(中维空间案例)</h3><p>针对文化遗产大场景与重要建筑物的实景三维模型需要不同精细程度的模型的难题,文中对无人机倾斜摄影设计方案进行了优化(如图 3 所示),采用带状倾斜航空摄影+环状倾斜航空摄影的技术方案,得到遗址区的航摄数据,最后融合两种影像数据生成遗产区的实景三维模型数据。从而在保证大区域实景三维模型的效果基础上,确保了重要遗址建筑物实景三维模型的精细程度,下面将以王家花园王宫为例介绍基于无人机倾斜航空摄影的文化遗产三维建模工艺流程。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210419163615.jpg" alt="img"></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210419153953.jpg" alt="倾斜摄影无人机"></p>
<p>传统航空摄影只能从垂直角度拍摄地物,倾斜摄影则通过在同一平台搭载多台传感器,同时从垂直、侧视等不同的角度采集影像,有效弥补了传统航空摄影的局限。那么,无人机倾斜摄影系统可以定义为: 以无人机为飞行平台,以倾斜摄影相机为任务设备的航空影像获取系统。</p>
<p>无人机倾斜摄影技术通过超低空倾斜摄影,从一个垂直和四个特定角度倾斜方向获取高清立体影像数据,并多角度采集信息,配合控制点或影像POS信息,影像上每个点都会有三维坐标,基于影像数据可对任意点线面进行量测,获取厘米级的测量精度并自动生成三维地理信息模型,快速获取地理信息,对建筑物等地物高度直接量算;影像中包含丰富的真实环境信息,可对影像信息的数据深度挖掘,具有高效率、低成本、数据精确、操作灵活、侧面信息可用等优点,极大调节测绘内、外业的协同工作,解决了天气等外因造成的传统人工作业延误。</p>
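<p>"影像上每个点都有三维坐标、可对任意点线面进行量测"在数值上其实非常直接:只要从成果模型上取到点的三维坐标,就能算出水平距离、斜距和高差(例如楼高)。下面是一个极简示意,两个点的坐标均为虚构的假设值。</p>
<figure class="highlight python"><table><tr><td class="code"><pre>import math

# 假设:从三维模型上拾取的两个点 (X, Y, Z),单位为米(坐标为虚构的示意值)
p1 = (500123.42, 2489567.10, 12.35)   # 某建筑物立面底部一点
p2 = (500125.88, 2489569.55, 41.80)   # 同一立面顶部一点

dx, dy, dz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
horizontal = math.hypot(dx, dy)                 # 水平距离
slope = math.sqrt(dx * dx + dy * dy + dz * dz)  # 斜距
print("水平距离 %.2f m,斜距 %.2f m,高差(楼高)约 %.2f m" % (horizontal, slope, dz))
</pre></td></tr></table></figure>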
<h1 id="0x02软件"><a href="#0x02软件" class="headerlink" title="0x02软件"></a>0x02软件</h1><h2 id="1-smart-3D"><a href="#1-smart-3D" class="headerlink" title="1.smart 3D"></a>1.smart 3D</h2><p>大疆无人机三维建模教程 只要是无人机就可以(玩具除外)mavic mini也可以建模 ContextCaptureMaster/Smart3D软件入门<a href="https://www.bilibili.com/video/BV1e7411J7YA/?spm_id_from=333.788.recommend_more_video.0" target="_blank" rel="noopener">教程</a></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421081528.png" alt="image-20210419153307045"></p>
]]></content>
</entry>
<entry>
<title>基于ArUco的距离角度定位</title>
<url>/2020/08/19/aruco-2d/</url>
<content><![CDATA[<h1 id="基于ArUco的距离角度定位"><a href="#基于ArUco的距离角度定位" class="headerlink" title="基于ArUco的距离角度定位"></a>基于ArUco的距离角度定位</h1><blockquote>
<p>利用aruco.estimatePoseSingleMarkers()函数返回找到的aurco标签的rvec旋转矩阵、tvec位移矩阵进行换算,找出aurco相对于相机cam的距离和角度,实现利用aurco进行定位</p>
</blockquote>
<figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> time</span><br><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line"><span class="keyword">import</span> cv2.aruco <span class="keyword">as</span> aruco</span><br><span class="line"><span class="keyword">import</span> math</span><br><span class="line"><span class="comment">#加载鱼眼镜头的yaml标定文件,检测aruco并且估算与标签之间的距离,获取偏航,俯仰,滚动</span></span><br><span class="line"></span><br><span class="line"><span class="comment">#加载相机纠正参数</span></span><br><span class="line">cv_file = cv2.FileStorage(<span class="string">"yuyan.yaml"</span>, cv2.FILE_STORAGE_READ)</span><br><span class="line">camera_matrix = cv_file.getNode(<span class="string">"camera_matrix"</span>).mat()</span><br><span class="line">dist_matrix = cv_file.getNode(<span class="string">"dist_coeff"</span>).mat()</span><br><span class="line">cv_file.release()</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment">#默认cam参数</span></span><br><span class="line"><span class="comment"># dist=np.array(([[-0.58650416 , 0.59103816, -0.00443272 , 0.00357844 ,-0.27203275]]))</span></span><br><span class="line"><span class="comment"># newcameramtx=np.array([[189.076828 , 0. , 361.20126638]</span></span><br><span class="line"><span class="comment"># ,[ 0 ,2.01627296e+04 ,4.52759577e+02]</span></span><br><span class="line"><span class="comment"># ,[0, 0, 1]])</span></span><br><span class="line"><span class="comment"># mtx=np.array([[398.12724231 , 0. , 304.35638757],</span></span><br><span class="line"><span class="comment"># [ 0. , 345.38259888, 282.49861858],</span></span><br><span class="line"><span class="comment"># [ 0., 0., 1. 
]])</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br><span class="line">cap = cv2.VideoCapture(<span class="number">0</span>)</span><br><span class="line"><span class="comment"># cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))</span></span><br><span class="line"><span class="comment"># cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)</span></span><br><span class="line"><span class="comment"># cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)</span></span><br><span class="line"></span><br><span class="line">font = cv2.FONT_HERSHEY_SIMPLEX <span class="comment">#font for displaying text (below)</span></span><br><span class="line"></span><br><span class="line"><span class="comment">#num = 0</span></span><br><span class="line"><span class="keyword">while</span> <span class="literal">True</span>:</span><br><span class="line"> ret, frame = cap.read()</span><br><span class="line"> h1, w1 = frame.shape[:<span class="number">2</span>]</span><br><span class="line"> <span class="comment"># 读取摄像头画面</span></span><br><span class="line"> <span class="comment"># 纠正畸变</span></span><br><span class="line"> newcameramtx, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_matrix, (h1, w1), <span class="number">0</span>, (h1, w1))</span><br><span class="line"> dst1 = cv2.undistort(frame, camera_matrix, dist_matrix, <span class="literal">None</span>, newcameramtx)</span><br><span class="line"> x, y, w1, h1 = roi</span><br><span class="line"> dst1 = dst1[y:y + h1, x:x + w1]</span><br><span class="line"> frame=dst1</span><br><span class="line"></span><br><span class="line"> <span class="comment">#灰度化,检测aruco标签,所用字典为6×6——250</span></span><br><span class="line"> gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)</span><br><span class="line"> aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_250)</span><br><span class="line"> parameters = aruco.DetectorParameters_create()</span><br><span class="line"></span><br><span class="line"> <span class="comment">#使用aruco.detectMarkers()函数可以检测到marker,返回ID和标志板的4个角点坐标</span></span><br><span class="line"> corners, ids, rejectedImgPoints = aruco.detectMarkers(gray,aruco_dict,parameters=parameters)</span><br><span class="line"></span><br><span class="line"><span class="comment"># 如果找不打id</span></span><br><span class="line"> <span class="keyword">if</span> ids <span class="keyword">is</span> <span class="keyword">not</span> <span class="literal">None</span>:</span><br><span class="line"> <span class="comment">#获取aruco返回的rvec旋转矩阵、tvec位移矩阵</span></span><br><span class="line"> rvec, tvec, _ = aruco.estimatePoseSingleMarkers(corners, <span class="number">0.05</span>, camera_matrix, dist_matrix)</span><br><span class="line"> <span class="comment"># 估计每个标记的姿态并返回值rvet和tvec ---不同</span></span><br><span class="line"> <span class="comment">#rvec为旋转矩阵,tvec为位移矩阵</span></span><br><span class="line"> <span class="comment"># from camera coeficcients</span></span><br><span class="line"> (rvec-tvec).any() <span class="comment"># get rid of that nasty numpy value array error</span></span><br><span class="line"> <span class="comment">#print(rvec)</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="comment">#在画面上 标注auruco标签的各轴</span></span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(rvec.shape[<span class="number">0</span>]):</span><br><span class="line"> aruco.drawAxis(frame, camera_matrix, 
dist_matrix, rvec[i, :, :], tvec[i, :, :], <span class="number">0.03</span>)</span><br><span class="line"> aruco.drawDetectedMarkers(frame, corners,ids)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="comment">###### 显示id标记 #####</span></span><br><span class="line"> cv2.putText(frame, <span class="string">"Id: "</span> + str(ids), (<span class="number">0</span>,<span class="number">64</span>), font, <span class="number">1</span>, (<span class="number">0</span>,<span class="number">255</span>,<span class="number">0</span>),<span class="number">2</span>,cv2.LINE_AA)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="comment">###### 角度估计 #####</span></span><br><span class="line"> <span class="comment">#print(rvec)</span></span><br><span class="line"> <span class="comment">#考虑Z轴(蓝色)的角度</span></span><br><span class="line"> <span class="comment">#本来正确的计算方式如下,但是由于蜜汁相机标定的问题,实测偏航角度能最大达到104°所以现在×90/104这个系数作为最终角度</span></span><br><span class="line"> deg=rvec[<span class="number">0</span>][<span class="number">0</span>][<span class="number">2</span>]/math.pi*<span class="number">180</span></span><br><span class="line"> <span class="comment">#deg=rvec[0][0][2]/math.pi*180*90/104</span></span><br><span class="line"> <span class="comment"># 旋转矩阵到欧拉角</span></span><br><span class="line"> R=np.zeros((<span class="number">3</span>,<span class="number">3</span>),dtype=np.float64)</span><br><span class="line"> cv2.Rodrigues(rvec,R)</span><br><span class="line"> sy=math.sqrt(R[<span class="number">0</span>,<span class="number">0</span>] * R[<span class="number">0</span>,<span class="number">0</span>] + R[<span class="number">1</span>,<span class="number">0</span>] * R[<span class="number">1</span>,<span class="number">0</span>])</span><br><span class="line"> singular=sy< <span class="number">1e-6</span></span><br><span class="line"> <span class="keyword">if</span> <span class="keyword">not</span> singular:<span class="comment">#偏航,俯仰,滚动</span></span><br><span class="line"> x = math.atan2(R[<span class="number">2</span>, <span class="number">1</span>], R[<span class="number">2</span>, <span class="number">2</span>])</span><br><span class="line"> y = math.atan2(-R[<span class="number">2</span>, <span class="number">0</span>], sy)</span><br><span class="line"> z = math.atan2(R[<span class="number">1</span>, <span class="number">0</span>], R[<span class="number">0</span>, <span class="number">0</span>])</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> x = math.atan2(-R[<span class="number">1</span>, <span class="number">2</span>], R[<span class="number">1</span>, <span class="number">1</span>])</span><br><span class="line"> y = math.atan2(-R[<span class="number">2</span>, <span class="number">0</span>], sy)</span><br><span class="line"> z = <span class="number">0</span></span><br><span class="line"> <span class="comment"># 偏航,俯仰,滚动换成角度</span></span><br><span class="line"> rx = x * <span class="number">180.0</span> / <span class="number">3.141592653589793</span></span><br><span class="line"> ry = y * <span class="number">180.0</span> / <span class="number">3.141592653589793</span></span><br><span class="line"> rz = z * <span class="number">180.0</span> / <span class="number">3.141592653589793</span></span><br><span class="line"></span><br><span class="line"> cv2.putText(frame,<span class="string">'deg_z:'</span>+str(ry)+str(<span class="string">'deg'</span>),(<span 
class="number">0</span>, <span class="number">140</span>), font, <span class="number">1</span>, (<span class="number">0</span>, <span class="number">255</span>, <span class="number">0</span>), <span class="number">2</span>,</span><br><span class="line"> cv2.LINE_AA)</span><br><span class="line"> <span class="comment">#print("偏航,俯仰,滚动",rx,ry,rz)</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="comment">###### 距离估计 #####</span></span><br><span class="line"> distance = ((tvec[<span class="number">0</span>][<span class="number">0</span>][<span class="number">2</span>] + <span class="number">0.02</span>) * <span class="number">0.0254</span>) * <span class="number">100</span> <span class="comment"># 单位是米</span></span><br><span class="line"> <span class="comment">#distance = (tvec[0][0][2]) * 100 # 单位是米</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="comment"># 显示距离</span></span><br><span class="line"> cv2.putText(frame, <span class="string">'distance:'</span> + str(round(distance, <span class="number">4</span>)) + str(<span class="string">'m'</span>), (<span class="number">0</span>, <span class="number">110</span>), font, <span class="number">1</span>, (<span class="number">0</span>, <span class="number">255</span>, <span class="number">0</span>), <span class="number">2</span>,</span><br><span class="line"> cv2.LINE_AA)</span><br><span class="line"></span><br><span class="line"> <span class="comment">####真实坐标换算####(to do)</span></span><br><span class="line"> <span class="comment"># print('rvec:',rvec,'tvec:',tvec)</span></span><br><span class="line"> <span class="comment"># # new_tvec=np.array([[-0.01361995],[-0.01003278],[0.62165339]])</span></span><br><span class="line"> <span class="comment"># # 将相机坐标转换为真实坐标</span></span><br><span class="line"> <span class="comment"># r_matrix, d = cv2.Rodrigues(rvec)</span></span><br><span class="line"> <span class="comment"># r_matrix = -np.linalg.inv(r_matrix) # 相机旋转矩阵</span></span><br><span class="line"> <span class="comment"># c_matrix = np.dot(r_matrix, tvec) # 相机位置矩阵</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="comment">##### DRAW "NO IDS" #####</span></span><br><span class="line"> cv2.putText(frame, <span class="string">"No Ids"</span>, (<span class="number">0</span>,<span class="number">64</span>), font, <span class="number">1</span>, (<span class="number">0</span>,<span class="number">255</span>,<span class="number">0</span>),<span class="number">2</span>,cv2.LINE_AA)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="comment"># 显示结果画面</span></span><br><span class="line"> cv2.imshow(<span class="string">"frame"</span>,frame)</span><br><span class="line"></span><br><span class="line"> key = cv2.waitKey(<span class="number">1</span>)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> key == <span class="number">27</span>: <span class="comment"># 按esc键退出</span></span><br><span class="line"> print(<span class="string">'esc break...'</span>)</span><br><span class="line"> cap.release()</span><br><span class="line"> cv2.destroyAllWindows()</span><br><span class="line"> <span class="keyword">break</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> key == ord(<span class="string">' '</span>): <span 
class="comment"># 按空格键保存</span></span><br><span class="line"><span class="comment"># num = num + 1</span></span><br><span class="line"><span class="comment"># filename = "frames_%s.jpg" % num # 保存一张图像</span></span><br><span class="line"> filename = str(time.time())[:<span class="number">10</span>] + <span class="string">".jpg"</span></span><br><span class="line"> cv2.imwrite(filename, frame)</span><br></pre></td></tr></table></figure>
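<p>上面的脚本在距离换算时用了一个经验系数,这里再补充一个剥离了画面绘制逻辑的极简示意:只要 estimatePoseSingleMarkers() 的标签边长参数按米传入,tvec 的模长就是相机光心到标签中心的距离(单位为米),偏航角可由 Rodrigues 变换得到的旋转矩阵求出。以下仅是思路示意,相机内参和畸变参数需替换为自己标定的结果。</p>
<figure class="highlight python"><table><tr><td class="code"><pre>import math
import cv2
import cv2.aruco as aruco
import numpy as np

def marker_pose(gray, camera_matrix, dist_coeffs, marker_len=0.05):
    """返回第一个检测到的标签的 (距离 m, 偏航角 deg),检测不到则返回 None"""
    aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_250)
    params = aruco.DetectorParameters_create()
    corners, ids, _ = aruco.detectMarkers(gray, aruco_dict, parameters=params)
    if ids is None:
        return None
    rvec, tvec, _ = aruco.estimatePoseSingleMarkers(corners, marker_len,
                                                    camera_matrix, dist_coeffs)
    distance = float(np.linalg.norm(tvec[0][0]))      # marker_len 按米传入时,模长即距离(米)
    R, _ = cv2.Rodrigues(rvec[0])                     # 旋转向量 -> 旋转矩阵
    yaw = math.degrees(math.atan2(R[1, 0], R[0, 0]))  # 与正文代码中的 z 欧拉角算法一致
    return distance, yaw
</pre></td></tr></table></figure>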
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200819094339.png" alt=""><br><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200819094340.png" alt=""></p>
<p>项目地址:<a href="https://github.com/ZengWenJian123/aruco_positioning_2D" target="_blank" rel="noopener">https://github.com/ZengWenJian123/aruco_positioning_2D</a></p>
<p>博客地址:<a href="https://blog.dgut.top/2020/08/19/aruco-2d/">https://blog.dgut.top/2020/08/19/aruco-2d/</a></p>
<p>csdn:<a href="https://blog.csdn.net/dgut_guangdian/article/details/108093643" target="_blank" rel="noopener">https://blog.csdn.net/dgut_guangdian/article/details/108093643</a></p>
]]></content>
<tags>
<tag>opencv</tag>
<tag>aruco</tag>
<tag>python</tag>
</tags>
</entry>
<entry>
<title>大疆智图使用教程</title>
<url>/2021/06/11/dji/</url>
<content><![CDATA[<h1 id="大疆智图使用教程"><a href="#大疆智图使用教程" class="headerlink" title="大疆智图使用教程"></a>大疆智图使用教程</h1><p>by zwj 2021/6/11</p>
<blockquote>
<p>大疆智图是一款提供自主航线规划、飞行航拍、二维正射影像与三维模型重建的 PC 应用软件。一站式解决方案帮助行业用户全面提升航测内外业效率,将真实场景转化为数字资产。</p>
<p>官网地址: <a href="https://www.dji.com/cn/dji-terra" target="_blank" rel="noopener">https://www.dji.com/cn/dji-terra</a></p>
</blockquote>
<h2 id="简介"><a href="#简介" class="headerlink" title="简介"></a>简介</h2><p>“御” 2 行业进阶版拥有更高清、流畅的热成像传感器和更高像素的可见光传感器,支持 32 倍数码变焦,可搭载 RTK 模块实现厘米级定位,便携、可靠,高效洞悉作业现场细节。</p>
<ul>
<li>640*512 30Hz 热成像相机 </li>
<li>4800万像素可见光相机</li>
<li>32倍率数码变焦</li>
<li>厘米级RTK定位</li>
<li>10km高清图传</li>
<li>六向避障</li>
</ul>
<p>实际到货套装清单:</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171439.png" alt="Snipaste_2021-06-11_09-12-37"></p>
<p>RTK模块安装示意图:</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171440.jpeg" alt="RTK模块安装示意图"></p>
<h2 id="飞行前准备(外业)"><a href="#飞行前准备(外业)" class="headerlink" title="飞行前准备(外业)"></a>飞行前准备(外业)</h2><p>首先启动遥控器、飞行器,使得二者连接成功(电源按钮的启动方式是短按再长按2秒)</p>
<h3 id="1-航线规划"><a href="#1-航线规划" class="headerlink" title="1.航线规划"></a>1.航线规划</h3><p>打开无人机遥控器进入<code>飞行界面Pilot</code> </p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171441.png" alt="Snipaste_2021-06-11_10-30-17"></p>
<p>在<code>飞行界面</code>中选择航线飞行</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171442.png" alt="Screenshot_20210611-085141"></p>
<p>选择所需要添加的航线飞行规划模式:</p>
<p>1.航点飞行:设置航点,无人机将会按照设置路线飞行</p>
<p>2.建图航拍:框选一个区域,无人机将会进行二维正射摄影得到一个二维地图</p>
<p>3.倾斜摄影:框选一个区域,无人机将会进行二维正射和三维倾斜摄影得到一个三维重建模型</p>
<p>4.航带飞行:框选一个带状区域,无人机将会进行航带飞行二维正射摄影得到一个带状二维重建模型</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171443.png" alt="Snipaste_2021-06-11_10-44-29"></p>
<p>这里选择了<code>倾斜摄影</code>模式用于重建这个区域的三维模型</p>
<ul>
<li>通过移动航点确定一个需要拍摄的区域(深蓝色)</li>
<li>飞控软件将会自动规划飞行器拍摄航线飞行区域(浅蓝色)</li>
</ul>
<p>完成航点规划后,飞控软件会生成五条航线,分别是:航线1为正射摄影航线,航线2-5为倾斜摄影航线。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171444.png" alt="Snipaste_2021-06-11_10-57-26"></p>
<p>打开航线规划的设置选项卡选择相应的无人机机型、云台倾斜角度、和拍照模式等设置选项,这些选项将会影响拍摄航线的长度和最后重建的质量。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171445.png" alt="Snipaste_2021-06-11_11-08-12"></p>
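<p>这些设置对航线长度的影响可以粗略这样理解:单张照片的地面覆盖宽度由 GSD 和影像像素数决定,重叠率设得越高,相邻照片和相邻航线的间距就越小,同一块区域需要飞的航线总长就越长。下面是一个粗略的估算示意,GSD、影像宽度等均为假设值。</p>
<figure class="highlight python"><table><tr><td class="code"><pre>def line_spacing_m(gsd_m, image_width_px, side_overlap):
    """由 GSD、影像宽度(像素)和旁向重叠率粗估相邻航线间距(米)"""
    footprint = gsd_m * image_width_px       # 单张影像的地面覆盖宽度
    return footprint * (1.0 - side_overlap)  # 重叠率越高,间距越小

# 假设 GSD 为 2 cm、影像宽 5472 像素(示意值)
for overlap in (0.6, 0.7, 0.8):
    print("旁向重叠 %.0f%% -> 航线间距约 %.0f m" % (overlap * 100, line_spacing_m(0.02, 5472, overlap)))
</pre></td></tr></table></figure>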
<p>全部完成之后选择<code>保存</code>或者<code>执行</code></p>
<hr>
<h2 id="三维重建(内业)"><a href="#三维重建(内业)" class="headerlink" title="三维重建(内业)"></a>三维重建(内业)</h2><p>完成外业航拍飞行任务之后,将无人机拍摄到的照片导入电脑,在电脑上运行大疆智图Terra软件</p>
<h3 id="1-硬件要求"><a href="#1-硬件要求" class="headerlink" title="1.硬件要求"></a>1.硬件要求</h3><ul>
<li>计算机要求:</li>
</ul>
<ol>
<li>中央处理器(CPU):CPU 性能极大程度影响重建速度,推荐使用核心数多、频率快的 CPU。推荐 Intel Core I7/I9/I10 和 AMD Ryzen 系列。</li>
<li>显卡(GPU):显卡性能较大程度影响重建速度,目前仅支持显存 4G 以上的英伟达显卡,建议使用显存较大、核心数多、频率快、工艺新的显卡,推荐2080Ti、3080Ti 系列</li>
<li>内存:内存的大小决定了空三能处理的影像数量,大约每空闲 1GB 内存可以处理 400 张精灵 4 RTK 采集的影像,根据测区具体情况,以上数据会有一定程度的浮动。建议至少 32G,推荐 64GB 及以上(估算示例见本列表后的代码)。</li>
<li>硬盘:主节点设备一般配备 1TB 的机械硬盘,推荐加装一个大容量的固态硬盘SSD,用于存储部分数据和日常使用。子节点设备一般配备 1TB 的机械硬盘,推荐加装固态硬盘,用于安装系统及常用软件,数据可存储在磁盘阵列服务器。主节点的缓存目录和子节点的本地临时存储目录(在子节点设备创建的本地目录,用于存储子节点计算时的临时文件,可定期清理空间以免影响重建)都设置在固态硬盘,可提高重建速度。</li>
</ol>
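<p>针对上面第 3 点"每空闲 1GB 内存约可处理 400 张影像"的经验值,可以用下面几行代码粗估不同影像数量大致需要的空闲内存(实际会随测区情况浮动,仅作参考):</p>
<figure class="highlight python"><table><tr><td class="code"><pre>def min_free_ram_gb(num_images, images_per_gb=400):
    """按经验规则粗估空三所需的空闲内存(GB),结果向上取整"""
    return -(-num_images // images_per_gb)  # 等价于向上取整的整除

for n in (2000, 8000, 20000):
    print("%d 张影像 -> 约需空闲内存 %d GB" % (n, min_free_ram_gb(n)))
</pre></td></tr></table></figure>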
<ul>
<li><p>网络存储服务器( NAS) (可选)</p>
<p>推荐企业级磁盘阵列服务器用于存储项目文件及缓存文件,并提供足够的冗余及备份。磁盘阵列服务器不用于计算,所以对 CPU、显卡等硬件性能要求低,而对硬盘性能要求较高,并且由于内存可用于磁盘缓存,加快数据读写速度,建议内存大于 32G。</p>
<p>可依据项目数据量来选择硬盘容量,推荐使用 NAS 专用硬盘(固态硬盘可较大程度提高集群重建速度)。NAS 设备建议使用万兆网卡增强吞吐能力。对于数据规模较大的用户,硬盘可选 10TB 的企业盘、7200 转,总计容量可达到 100TB 以上。</p>
<p>若条件不允许,也可使用一台普通电脑,通过设置 Windows 共享文件夹作为网络存储目录。也可使用主节点的本地磁盘,通过设置 Windows 共享文件夹作为网络储存目录。</p>
</li>
</ul>
<h3 id="2-软件需求"><a href="#2-软件需求" class="headerlink" title="2.软件需求"></a>2.软件需求</h3><p>下载安装大疆智图 <a href="https://www.dji.com/dji-terra/info#downloads" target="_blank" rel="noopener">https://www.dji.com/dji-terra/info#downloads</a> </p>
<h3 id="3-三维重建"><a href="#3-三维重建" class="headerlink" title="3.三维重建"></a>3.三维重建</h3><p>打开大疆智图选择<code>新建任务</code>选项,根据项目要求选择相应的重建任务,这里选择<code>三维模型</code>为这次的任务类型</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171446.png" alt="image-20210611161016466"></p>
<p>创建好任务后点击导入照片,导入我们无人机外业拍到的影像数据</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171447.png" alt="image-20210611161247162"></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171448.png" alt="image-20210611161255414"></p>
<p>通过<code>像控点管理选项卡</code>可以查看每一个航拍出来的图片的POS数据,记录了相机的姿态和定位信息,通过这些信息做空中三角测量就可以得出实际地图上的一个点的真实世界坐标</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171449.png" alt="Snipaste_2021-06-11_16-21-28"></p>
<p>通过<code>输出坐标系选项卡</code>选择<code>CGCS2000 114E</code>作为我们这个模型的输出坐标系</p>
<blockquote>
<p>CGCS2000–是中国大陆官方使用的坐标系</p>
</blockquote>
<p> <img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171450.png" alt="image-20210611162451848"></p>
<p>最后在<code>重建结果选项卡</code>中选择需要输出的模型类型</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171451.png" alt="image-20210611162527015"></p>
<p>设置妥当后即可开始重建</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171452.png" alt="image-20210611162644264"></p>
<hr>
<h2 id="重建结果"><a href="#重建结果" class="headerlink" title="重建结果"></a>重建结果</h2><p>大疆智图将会自动完成三维模型重建运算,最后的运算结果可以很直观地显示出来,如果勾选了其他三维格式比如说<code>obj</code>等通用格式,可以从工程文件夹中导出相应的三维格式到其他软件中。</p>
<p>三维效果:</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171453.png" alt="image-20210611163131270"></p>
<p>二维效果:</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171454.png" alt="image-20210611163456413"></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171455.png" alt="image-20210611163505200"></p>
<p>通过质量报告得知GSD(地面采样距离,即图像地面分辨率)为0.017米,相当于建图精度达到了厘米级别。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171456.png" alt="Snipaste_2021-06-11_16-43-43"></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171457.png" alt="Snipaste_2021-06-11_16-45-06"></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171458.png" alt="dsm_screennail"></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210611171459.png" alt="dom_screennail"></p>
]]></content>
<tags>
<tag>dji</tag>
</tags>
</entry>
<entry>
<title>docker学习</title>
<url>/2020/08/25/docker/</url>
<content><![CDATA[<h1 id="Docker学习记录"><a href="#Docker学习记录" class="headerlink" title="Docker学习记录"></a>Docker学习记录</h1><p>最近也是把基于aruco的视觉室内定位做好了,摄像头通过检测aruco码就可以获得相对距离和角度,再带入整个机器人的地图中就可以起到一个很好的辅助定位的功能。不过缺点就是aruco标签的样式是无法更改的,就一张A4纸贴在墙面上非常影响美观,所以说下一步应该就是进行<code>物体检测</code>例如检测到一幅画、一面墙壁、一个楼梯等特征比较明显的物体来辅助定位。</p>
<p>要实现<code>物体检测</code>我觉得光靠opencv的级联分类器是远远不够的,所以项目肯定是要往深度学习上靠拢的。项目组正好有一台正在使用的GPU服务器,我粗略地看了下配置:CPU是两路至强<code>E5-2640 v4 @ 2.40GHz</code>,8路<code>gtx1080ti</code>,250G内存。</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200825101817.png" alt=""></p>
<p>目前有个问题就是,服务器目前运行着项目组其他成员的一些训练程序而且我们用的编程环境可能不同就会造成<code>cuda</code>、<code>tensorflow</code>环境错误。因此我们要使用<code>docker</code>作为训练环境的整体。本文就是记录一些<code>docker</code>的使用方法,作为初学者,记录一下还是很有必要的。</p>
<hr>
<h2 id="安装"><a href="#安装" class="headerlink" title="安装"></a>安装</h2><p><a href="https://baike.baidu.com/item/docker/13344470" target="_blank" rel="noopener">简介</a>:Docker是一个<a href="https://baike.baidu.com/item/开源/246339" target="_blank" rel="noopener">开源</a>的应用容器引擎,让开发者可以打包他们的应用以及依赖包到一个可移植的镜像中,然后发布到任何流行的 <a href="https://baike.baidu.com/item/Linux" target="_blank" rel="noopener">Linux</a>或<a href="https://baike.baidu.com/item/Windows/165458" target="_blank" rel="noopener">Windows</a> 机器上,也可以实现<a href="https://baike.baidu.com/item/虚拟化/547949" target="_blank" rel="noopener">虚拟化</a>。容器是完全使用<a href="https://baike.baidu.com/item/沙箱/393318" target="_blank" rel="noopener">沙箱</a>机制,相互之间不会有任何接口。</p>
<p> Docker <a href="https://www.runoob.com/docker/ubuntu-docker-install.html" target="_blank" rel="noopener">安装</a>:</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun</span><br></pre></td></tr></table></figure>
<p>下载地址:<a href="https://links.jianshu.com/go?to=https%3A%2F%2Fhub.docker.com%2Feditions%2Fcommunity%2Fdocker-ce-desktop-windows" target="_blank" rel="noopener">Docker Desktop for Windows - Docker Hub</a></p>
<p>选择 stable 稳定版下载,傻瓜式安装过程,一键到底。</p>
<p>电脑重启后打开 Docker,点击右下角任务栏 Docker 的 Dashboard。</p>
<p>在终端输入docker后看到如下信息则证明安装成功:</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200825102534.png" alt=""></p>
<h2 id="使用"><a href="#使用" class="headerlink" title="使用"></a>使用</h2><p>最近在公司3楼布置了一圈AP网络,用于巡检机器人的调试,但是我们对这个网络的容量一直不是很清楚,今天在github上看到了<code>librespeed-speedtest</code>这个<a href="https://github.com/librespeed/speedtest" target="_blank" rel="noopener">项目</a>,并且这个项目支持docker部署,所以就拿来实践一下,顺带测试一下wifi局域网吞吐容量。<a href="https://hub.docker.com/r/adolfintel/speedtest" target="_blank" rel="noopener">docker地址</a></p>
<ul>
<li>机器人网络示意图</li>
</ul>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200825135102.png" alt=""></p>
<ul>
<li>speedtest docker项目</li>
</ul>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200825102722.png" alt=""></p>
<h2 id="部署"><a href="#部署" class="headerlink" title="部署"></a>部署</h2><p><a href="https://www.jianshu.com/p/00e8ae89224d" target="_blank" rel="noopener">简书</a></p>
<p>docker部署起来也是超简单,2分钟就好了</p>
<h3 id="1-镜像下载:"><a href="#1-镜像下载:" class="headerlink" title="1.镜像下载:"></a>1.镜像下载:</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">docker pull adolfintel/speedtest</span><br></pre></td></tr></table></figure>
<p>网络不好建议重复操作,若显示类似于下方文字,则说明下载完成:</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200825135411.png" alt=""></p>
<h3 id="2-启动docker"><a href="#2-启动docker" class="headerlink" title="2.启动docker"></a>2.启动docker</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">docker run -d -p 8080:80 adolfintel/speedtest:latest</span><br></pre></td></tr></table></figure>
<blockquote>
<ul>
<li>-d,后台运行(建议使用)</li>
<li>-p,端口映射(可自行修改其它端口)</li>
</ul>
</blockquote>
<p>此时,可以在之前的 Dashboard 中看到后台运行的容器。本机能打开网页 <a href="https://links.jianshu.com/go?to=http%3A%2F%2Flocalhost%3A8080" target="_blank" rel="noopener">http://localhost:8080</a> 也能说明服务启动成功。</p>
<h3 id="3-测速"><a href="#3-测速" class="headerlink" title="3.测速"></a>3.测速</h3><p>测速过程就非常傻瓜了~</p>
<p>局域网的其它设备打开网页 http://[PC IP]:8080 即可进行测速。如果不能访问,可能是防火墙、路由器设置或其它方面的问题。</p>
<figure class="highlight plain"><table><tr><td class="code"><pre><span class="line">http://192.168.2.182:8080/</span><br></pre></td></tr></table></figure>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200825135726.png" alt=""></p>
<h3 id="4-设置DMZ主机"><a href="#4-设置DMZ主机" class="headerlink" title="4.设置DMZ主机"></a>4.设置DMZ主机</h3><blockquote>
<p>让您得以将一部计算机公开显露在互联网上,使所有上传的封包全数转向您指定的计算机。这对您在运行一些使用非特定内传通信端口(incoming port)的应用程序时会相当有用。请谨慎使用。</p>
</blockquote>
<p>通俗来说就是电脑通过路由器链接到公司局域网,路由器的ip和电脑局域网ip不同(百层nat狗头),要把电脑设置为<code>DMZ主机</code>之后访问路由器ip就可以访问到测速网页了。</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200825140243.png" alt=""></p>
<h2 id="docker命令"><a href="#docker命令" class="headerlink" title="docker命令"></a>docker命令</h2><figure class="highlight shell"><table><tr><td class="code"><pre><span class="line">登录服务器http://192.168.221.11/</span><br><span class="line">ssh zwj@AISRV</span><br><span class="line">查看docker运行列表</span><br><span class="line">docker ps</span><br><span class="line">运行docker</span><br><span class="line">docker run -d -p 8081:80 adolfintel/speedtest:latest</span><br><span class="line">停止docker</span><br><span class="line">docker stop compassionate_knuth</span><br><span class="line">进入docker 启动docker</span><br><span class="line">cd work</span><br><span class="line">cd comprehen/</span><br><span class="line">./docker_start.sh </span><br><span class="line">./docker_into.sh</span><br></pre></td></tr></table></figure>
<h1 id="docker搭建宝塔管理面板"><a href="#docker搭建宝塔管理面板" class="headerlink" title="docker搭建宝塔管理面板"></a>docker搭建宝塔管理面板</h1><blockquote>
<p>宝塔Linux面板是提升运维效率的服务器管理软件,支持一键LAMP/LNMP/集群/监控/网站/FTP/数据库/JAVA等100多项服务器管理功能。<br>有30个人的专业团队研发及维护,经过200多个版本的迭代,功能全,少出错且足够安全,已获得全球百万用户认可安装。运维要高效,装宝塔。</p>
</blockquote>
<h2 id="安装步骤"><a href="#安装步骤" class="headerlink" title="安装步骤"></a>安装步骤</h2><p>先从<code>docker</code> 拉取一个<code>centos</code>镜像下来先</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><span class="line">docker pull centos</span><br></pre></td></tr></table></figure>
<p>运行镜像</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><span class="line">docker run -i -t -d --name baota-zwj -p 20:20 -p 21:21 -p 80:80 -p 443:443 -p 888:888 -p 8888:8888 -p 8084:8084 -p 8085:8085 --privileged=true -v /data1/zwj/baota:/www centos</span><br></pre></td></tr></table></figure>
<blockquote>
<p>-p 外部端口号:内部端口号,这里开放了20、21、80、443、888、8888、8084、8085端口</p>
</blockquote>
<blockquote>
<p>-v 本地路径:内部路径 想对于挂载一个硬盘到docker上去,这个硬盘可以在本地中映射到docker里</p>
</blockquote>
<p>进入宝塔docker</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><span class="line">docker exec -it baota-zwj /bin/bash</span><br></pre></td></tr></table></figure>
<p>宝塔安装:</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><span class="line">yum install -y wget && wget -O install.sh http://download.bt.cn/install/install_6.0.sh && sh install.sh</span><br></pre></td></tr></table></figure>
<p>安装成功后会显示宝塔的登录地址和账户密码一般是:<a href="http://localhost:8888" target="_blank" rel="noopener">http://localhost:8888</a> </p>
<p>启动</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><span class="line">/etc/init.d/bt start</span><br></pre></td></tr></table></figure>
<p>登录</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200831142929.png" alt=""></p>
<p>进入面板</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200831142905.png" alt=""></p>
<p>资源监控</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200831143027.png" alt=""></p>
]]></content>
<tags>
<tag>ubuntu</tag>
<tag>docker</tag>
<tag>speedtest</tag>
</tags>
</entry>
<entry>
<title>2020年毕业一年总结</title>
<url>/2021/04/13/HEC2020/</url>
<content><![CDATA[<blockquote>
<h3 id="转眼就是2021年4月份了,时隔快一年了回到自己的网站了。同时也意味着我毕业也快一年了,工作也快一年了。"><a href="#转眼就是2021年4月份了,时隔快一年了回到自己的网站了。同时也意味着我毕业也快一年了,工作也快一年了。" class="headerlink" title="转眼就是2021年4月份了,时隔快一年了回到自己的网站了。同时也意味着我毕业也快一年了,工作也快一年了。"></a>转眼就是2021年4月份了,时隔快一年了回到自己的网站了。同时也意味着我毕业也快一年了,工作也快一年了。</h3></blockquote>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210413105841.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210415111245.jpg" alt=""></p>
<h1 id="0x00-实习"><a href="#0x00-实习" class="headerlink" title="0x00 实习"></a>0x00 实习</h1><p>在毕业之前我去了中国光电集团实习了一段时间,承接的是海外(马来西亚)的光伏发电项目,在马来待了一段时间也算耳濡目染了一下当地人的生活习性,在那边我惊喜地发现当地的一个专车司机居然会讲6种语言:英语、普通话、粤语、马来西亚语、泰语、潮汕话等等。这样一来就开森了,我一下就有3种语言交流了(普通话、粤语、和我一般般的英语),结果后来发现不是每个人懂那么多种语言的,还是要用英语交流。。。</p>
<h1 id="0x01-毕业"><a href="#0x01-毕业" class="headerlink" title="0x01 毕业"></a>0x01 毕业</h1><p>我是2020年6月份毕业的,感觉2020年过年后像是失忆了,时间很快就过了,因为疫情的缘故我们仓促毕业。在那段时间在家毕业设计,和兄弟朋友一起网上视频聊天。</p>
<p>终于等到了5月底回学校的通知,而这一次,不是开学,而是回去收拾东西走人。而且还分多批陆陆续续地回校,好多同学连最后一面都见不到了。</p>
<p>5月底,稀稀疏疏地回到学校,在食堂吃一顿、在宿舍串串门分享一下对未来的彷徨、找老师聊聊天、拿几套衣服拍个单人毕业照。转眼宿舍已经空空的了,我送走宿舍的“大力哥”后就仅剩我一人了,心情空落落的。</p>
<p>在和老师一一道别合影之后我来到了辅导员的房间开始办理毕业的手续,手续很快但我却希望他能慢点。很快我们都要离开了,没想到毕业竟然是如此的简单…甚至有点儿平静,和大伙告别后离开学校。</p>
<p>出校门后,四年的大学生活宣告结束,突然有很多的不舍,可是已经回不去了呀~</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210415143316.png" alt="image-20210415143313608"></p>
<p>(拍毕业照)</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210415142823.png" alt="实验室"></p>
<p>(私人办公室也被我们搬空了)</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210415143228.png" alt="光电实验室"></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210415143242.png" alt="image-20210415143239875"></p>
<p>(光电创新实验室)</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210415143336.png" alt="image-20210415143333363"></p>
<p>(宿舍门口)</p>
<p>再见!</p>
<h1 id="0x02-东阳光工作"><a href="#0x02-东阳光工作" class="headerlink" title="0x02 东阳光工作"></a>0x02 东阳光工作</h1><p>入职快一年了,我是6月底来到东阳光的研究院的,现在应该是第一次全面地接触社会,虽然在学校内已经有很多对毕业后的幻想,但是突如其来的疫情让很多计划都落空了。</p>
<p>唯一感到舒适的事情是毕业-面试-入职东阳光这个过程一帆风顺,在此之前一直没听说过东阳光集团,在一些当地的企业家、股长推荐之后才逐渐了解到位于长安上沙的东阳光的,话说在长安生活那么久都没有来过上沙社区的我也对这个地方感觉到很新奇,对未来一切都很期待。</p>
<h2 id="1-情况介绍"><a href="#1-情况介绍" class="headerlink" title="1.情况介绍"></a>1.情况介绍</h2><p>转眼就成为了东阳光的一份子</p>
<p>我迫不及待地我很快就来到了公司这里,第一次先过来这边左看看又看看,很好奇的是这里的建筑物都很有特色,像~罗马建筑,大门看进去发现园区左右都是对称的👍。对比起上沙周围破破烂烂工业区的建筑物来说还是很气派的!</p>
<h3 id="1-1吃的"><a href="#1-1吃的" class="headerlink" title="1.1吃的"></a>1.1吃的</h3><p>看完环境就看了下吃的,这里有大食堂、小食堂、中餐厅、韩国料理,几乎很容易在这里找到合适的吃的东西。平时一般中午在大食堂,晚餐在小食堂、韩国料理里解决了。公司的大食堂就是我们所说的饭堂,因为很大一部分食材是公司乳源基地生产的,相当于自产自销,所以费用很低。小食堂、中餐厅、韩国料理相当于点餐的地方,接待客人或者宴请的情况来的比较多。</p>
<p>平时接触的一些诸如大疆、华为、中国移动、海康威视的企业代表会后也是带他们来这边就餐,算是对他们跋山涉水远道而来的一种感恩吧!</p>
<h3 id="1-2玩的"><a href="#1-2玩的" class="headerlink" title="1.2玩的"></a>1.2玩的</h3><p>一入职我们研究院就挑选了一些新人都参加了企业举办的年会舞蹈活动(后面有变数),我们于是乎在这个过程中认识了很多东阳光的小伙伴,后面也成为了很好的朋友。</p>
<h3 id="1-3住的"><a href="#1-3住的" class="headerlink" title="1.3住的"></a>1.3住的</h3><h2 id="2-入职之后的工作"><a href="#2-入职之后的工作" class="headerlink" title="2.入职之后的工作"></a>2.入职之后的工作</h2><p>入职之后就是一个崭新的员工,啥活都不懂,只能多多找这边的各种师兄师姐(我们是这样称呼的)学习,在此过程中大家都很热情,我也逐渐地看到了各位大佬的实力(佩服佩服是😎)</p>
<p>(未完待写)</p>
]]></content>
</entry>
<entry>
<title>感谢永远有歌,把心境道破</title>
<url>/2020/07/13/eason/</url>
<content><![CDATA[<h2 id="7-11Eason-Live-is-so-much-better-with-Music"><a href="#7-11Eason-Live-is-so-much-better-with-Music" class="headerlink" title="7.11Eason Live is so much better with Music"></a>7.11Eason Live is so much better with Music</h2><h2 id="有-了-音-乐-生-活-更-美-好-,Eason"><a href="#有-了-音-乐-生-活-更-美-好-,Eason" class="headerlink" title="有 了 音 乐 生 活 更 美 好 ,Eason."></a>有 了 音 乐 生 活 更 美 好 ,Eason.</h2><p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/MagiDrag0n/PicBed/img/Eason.jpg" alt="img"></p>
<a id="more"></a>
<p>youtube地址:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/9KBBYv5neMk" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<blockquote><p>今天最幸福的事就是早上起来陪陈奕迅看日出,下午陪陈奕迅看日落!</p></blockquote>
<h2 id=""><a href="#" class="headerlink" title=""></a></h2><p><img src= "/img/loading.gif" data-src="https://pics7.baidu.com/feed/6159252dd42a2834ad1f72340f993cec15cebf03.jpeg?token=f80acdd8826a5aff8f7f729a63b40631" alt="img"></p>
<p>致敬这场疫情中默默奉献的那些人</p>
]]></content>
<tags>
<tag>日常</tag>
</tags>
</entry>
<entry>
<title>git的使用firebase+github pages同时提交博客</title>
<url>/2020/07/14/git%E7%9A%84%E4%BD%BF%E7%94%A8/</url>
<content><![CDATA[<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714154834.png" alt=""></p>
<a id="more"></a>
<p>firebase的访问一直不稳定,所以还是在github pages上面也同步一份博客的内容吧</p>
<h1 id="git的使用记录"><a href="#git的使用记录" class="headerlink" title="git的使用记录"></a>git的使用记录</h1><p>感谢一下<a href="https://magidrag0n.github.io/" target="_blank" rel="noopener">@magidrag0n</a>大佬的教学</p>
<ul>
<li><p>首先在你的github创立一个名叫:<code>你的github用户名</code>+ <code>github.io</code>的仓库</p>
<ul>
<li>我的github用户名叫<code>zengwenjian123</code>所以我建立的仓库名叫:<code>zengwenjian123.github.io</code></li>
</ul>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200714151120.png" alt=""></p>
</li>
<li><p>然后命令行进入到博客的文件夹</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> hexo </span><br><span class="line"><span class="comment">#进入博客根目录</span></span><br></pre></td></tr></table></figure>
</li>
<li><p>安装插件</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">npm install --save hexo-deployer-git</span><br></pre></td></tr></table></figure>
</li>
<li><p>打开站点配置文件<code>_config</code></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200714151700.png" alt=""></p>
<ul>
<li><p>在最下面添加一个函数</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">deploy:</span><br><span class="line"> <span class="built_in">type</span>: <span class="string">'git'</span></span><br><span class="line"> repo: <span class="string">'https://github.com/ZengWenJian123/ZengWenJian123.github.io'</span></span><br><span class="line"> branch: <span class="string">'master'</span></span><br></pre></td></tr></table></figure>
<p>这里填入的是你自己的仓库路径</p>
</li>
</ul>
</li>
</ul>
<p> <img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200714151915.png" alt=""></p>
<ul>
<li><p>运行设置账户名</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">git config --global user.email <span class="string">"you@example.com"</span></span><br><span class="line">git config --global user.name <span class="string">"Your Name"</span></span><br></pre></td></tr></table></figure>
<p>来设置您账号的缺省身份标识</p>
</li>
<li><p>然后运行hexo三连:</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">hexo cl</span><br><span class="line">hexo g</span><br><span class="line">hexo d</span><br></pre></td></tr></table></figure>
<p>提示输入username for github时输入你的github登录邮箱</p>
<p>提示输入password for github时输入你的github登录密码</p>
<p>输入指令记住账户和密码(不用每次部署的时候再次输入)</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">git config --global credential.helper store</span><br></pre></td></tr></table></figure>
</li>
<li><p>你的hexo博客下的<code>public</code>文件夹将会上传到github仓库了</p>
</li>
</ul>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714152551.png" alt=""></p>
<ul>
<li>现在访问<a href="https://zengwenjian123.github.io/" target="_blank" rel="noopener">https://zengwenjian123.github.io/</a> 将可以访问到你的博客(说好的google firebase真香呢?)</li>
</ul>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714152824.png" alt=""></p>
<ul>
<li>博客部署github pasges完成</li>
</ul>
<hr>
<h1 id="注意事项"><a href="#注意事项" class="headerlink" title="注意事项"></a>注意事项</h1><p>Username for ‘<a href="https://github.com'" target="_blank" rel="noopener">https://github.com'</a>: 输入的是github上的邮箱账号, 而不是github中设置的username, 这是个巨坑!!!<br>Password for ‘https://你的github邮箱@github.com’: 输入github的登录密码,点击enter键即可.</p>
<p>利用下面的代码记住账户和密码</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">git config --global credential.helper store</span><br></pre></td></tr></table></figure>
<hr>
<h1 id="自动化"><a href="#自动化" class="headerlink" title="自动化"></a>自动化</h1><p>设置短命令:</p>
<p>令在ubuntu的环境下可以使用短命令来执行一键将静态博客页面部署到github</p>
<p>首先打开你个人目录下的.bashrc隐藏文件</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">vim ~/.bashrc</span><br><span class="line"><span class="comment">#把光标移到末尾按’i’键插入一行</span></span><br><span class="line"><span class="built_in">alias</span> gdd=<span class="string">'hexo clean && hexo g && hexo d'</span></span><br></pre></td></tr></table></figure>
<blockquote>
<p>然后按’Esc’后按’:wq’保存退出</p>
</blockquote>
<ul>
<li><p>最后在终端输入命令生效刚刚的更改就完事了</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line"><span class="built_in">source</span> ~/.bashrc</span><br></pre></td></tr></table></figure>
</li>
</ul>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714153734.png" alt=""></p>
<p><code>gdd</code>(搞大点)就是短命令名,每当输入<code>gdd</code>将自动执行</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">hexo clean</span><br><span class="line">hexo g</span><br><span class="line">hexo d</span><br></pre></td></tr></table></figure>
<p>部署三连</p>
<table>
<thead>
<tr>
<th>firebase 托管</th>
<th><a href="https://usg-cn.web.app/" target="_blank" rel="noopener">https://usg-cn.web.app/</a></th>
</tr>
</thead>
<tbody><tr>
<td>github 托管</td>
<td><a href="https://zengwenjian123.github.io/" target="_blank" rel="noopener">https://zengwenjian123.github.io/</a></td>
</tr>
</tbody></table>
<p>这两个blog将会同时更新</p>
<hr>
<h1 id="git相应代码"><a href="#git相应代码" class="headerlink" title="git相应代码"></a>git相应代码</h1><figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">git add README.md </span><br><span class="line">git add aruco_positioning_2D/</span><br><span class="line">git status </span><br><span class="line">git commit </span><br><span class="line">git <span class="built_in">log</span></span><br></pre></td></tr></table></figure>
]]></content>
<tags>
<tag>博客</tag>
<tag>git</tag>
<tag>hexo</tag>
</tags>
</entry>
<entry>
<title>Hexo博客收录百度和谷歌2020.7.14更新</title>
<url>/2020/07/13/google/</url>
<content><![CDATA[<p>博客已经搭建成功了一段时间了,并且添加了一些博文,不过看到博客底部的访客人数还是感觉特别寒酸,为了使博客的曝光度提高,所以就考虑主动让百度或者谷歌等搜索引擎收录。</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713171218.png" alt=""></p>
<a id="more"></a>
<h1 id="首先确认站点是否已经被收录了"><a href="#首先确认站点是否已经被收录了" class="headerlink" title="首先确认站点是否已经被收录了"></a>首先确认站点是否已经被收录了</h1><p>我的博客地址为:<code>usg-cn.web.app</code>所以可以在百度和谷歌输入下面的格式来判断站点是否已经被收录了。</p>
<figure class="highlight"><table><tr><td class="code"><pre><span class="line">site:usg-cn.web.app/</span><br></pre></td></tr></table></figure>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713171516.png" alt=""><br><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713171517.png" alt=""></p>
<p>百度的没有,谷歌已经收录了</p>
<blockquote>
<p>研究了好久,总感觉百度的搜索蜘蛛效果比谷歌的差一点,新网站谷歌很快就收录了,百度要等好久。</p>
</blockquote>
<p>站点还没有被收录就继续下列步骤</p>
<h1 id="安装扩展插件"><a href="#安装扩展插件" class="headerlink" title="安装扩展插件"></a>安装扩展插件</h1><blockquote>
<p>站点地图是一种文件,您可以通过该文件列出您网站上的网页,从而将您网站内容的组织架构告知Google和其他搜索引擎。Googlebot等搜索引擎网页抓取工具会读取此文件,以便更加智能地抓取您的网站。</p>
</blockquote>
<p>在你的hexo博客根目录,用下面2个命令分别安装谷歌、百度所对应的站点地图生成文件</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">npm install hexo-generator-sitemap --save</span><br><span class="line">npm install hexo-generator-baidu-sitemap --save</span><br></pre></td></tr></table></figure>
<p>在博客目录的_config.yml中添加如下代码</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 自动生成sitemap</span></span><br><span class="line"><span class="comment"># sitemap</span></span><br><span class="line">sitemap:</span><br><span class="line"> path: sitemap.xml</span><br><span class="line">baidusitemap:</span><br><span class="line"> path: baidusitemap.xml</span><br></pre></td></tr></table></figure>
<p>编译你的博客</p>
<figure class="highlight sh"><table><tr><td class="code"><pre><span class="line">hexo g</span><br></pre></td></tr></table></figure>
<p>然后你可以看到在你博客下的<code>public</code>目录下生成了<code>sitemap.xml</code>以及<code>baidusitemap.xml</code>文件,这样就大功告成了。<code>sitemap.xml</code>是提交给谷歌的、<code>baidusitemap.xml</code>是提交给百度的。</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714082740.png" alt=""></p>
<p>部署后你分别访问<br><a href="https://usg-cn.web.app/sitemap.xml" target="_blank" rel="noopener">https://usg-cn.web.app/sitemap.xml</a></p>
<p><a href="https://usg-cn.web.app/baidusitemap.xml" target="_blank" rel="noopener">https://usg-cn.web.app/baidusitemap.xml</a></p>
<p>看到如下画面就证明已经成功了</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714083125.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714083205.png" alt=""></p>
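<p>除了手动用浏览器打开,也可以用一小段脚本顺手确认两个站点地图都能正常访问(只是个自用的小工具示意,域名换成你自己的即可):</p>
<figure class="highlight python"><table><tr><td class="code"><pre>import urllib.request

urls = [
    "https://usg-cn.web.app/sitemap.xml",
    "https://usg-cn.web.app/baidusitemap.xml",
]
for url in urls:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(url, resp.status, "约 %d 字节" % len(resp.read()))
    except Exception as e:
        print(url, "访问失败:", e)
</pre></td></tr></table></figure>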
<hr>
<h1 id="验证网站所有权"><a href="#验证网站所有权" class="headerlink" title="验证网站所有权"></a>验证网站所有权</h1><ul>
<li><a href="https://link.jianshu.com/?t=https://www.google.com/webmasters/tools/home?hl=zh-CN" target="_blank" rel="noopener">Google搜索引擎提交入口</a></li>
<li><a href="https://link.jianshu.com/?t=http://www.baidu.com/search/url_submit.htm" target="_blank" rel="noopener">百度搜索引擎入口</a></li>
</ul>
<blockquote>
<p><a href="https://link.jianshu.com?t=http://zhanzhang.baidu.com/college/courseinfo?id=267&page=1#h2_article_title3" target="_blank" rel="noopener">为什么要验证网站</a> <br>站长平台推荐站长添加主站(您网站的链接也许会使用www 和非 www 两种网址,建议添加用户能够真实访问到的网址),添加并验证后,可证明您是该域名的拥有者,可以快捷批量添加子站点,查看所有子站数据,无需再一一验证您的子站点。<br><a href="https://link.jianshu.com?t=http://zhanzhang.baidu.com/college/courseinfo?id=267&page=1#h2_article_title13" target="_blank" rel="noopener">如何验证网站</a><br>首先如果您的网站已使用了百度统计,您可以使用统计账号登录平台,或者绑定站长平台与百度统计账号,站长平台支持您批量导入百度统计中的站点,您不需要再对网站进行验证。<br>百度站长平台为未使用百度统计的站点提供三种验证方式:<strong>文件验证、html标签验证、CNAME验证</strong>。<br>1.文件验证:您需要下载验证文件,将文件上传至您的服务器,放置于域名根目录下。<br>2.html标签验证:将html标签添加至网站首页html代码的&lt;head&gt;标签与&lt;/head&gt;标签之间。<br>3.CNAME验证:您需要登录域名提供商或托管服务提供商的网站,添加新的DNS记录。<br>验证完成后,我们将会认为您是网站的拥有者。为使您的网站一直保持验证通过的状态,请保留验证的文件、html标签或CNAME记录,我们会去定期检查验证记录。<br>参考链接:<a href="https://www.jianshu.com/p/5e68f78c7791" target="_blank" rel="noopener">https://www.jianshu.com/p/5e68f78c7791</a>(来源:简书)</p>
</blockquote>
<h2 id="百度:"><a href="#百度:" class="headerlink" title="百度:"></a>百度:</h2><p>登录百度<a href="https://ziyuan.baidu.com/linksubmit/url" target="_blank" rel="noopener">资源搜索平台</a><code>用户中心</code> > <code>站点管理</code>,点击<code>添加站点</code></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713172043.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713172059.png" alt=""></p>
<p><code>站点领域</code>随便填一下就好,然后选择<code>文件验证</code>验证你的网站</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713172447.png" alt=""></p>
<p>下载红框中的<code>验证文件</code>,将它拷贝到<code>hexo/themes/next/source</code>文件夹下</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713172723.png" alt=""></p>
<p>然后重新部署网站,访问<code>博客域名</code>+<code>/验证文件名</code>,看看能不能打开,例如我输入的是这个:</p>
<figure class="highlight dts"><table><tr><td class="code"><pre><span class="line"><span class="symbol">https:</span><span class="comment">//usg-cn.web.app/baidu_verify_DppfZ4udwW.html</span></span><br></pre></td></tr></table></figure>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713173031.png" alt=""></p>
<p>就证明验证文件放的位置对了,接着就可以回去验证百度站点了(等待10分钟即可认证完成)</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713173131.png" alt=""></p>
<hr>
<h2 id="谷歌:"><a href="#谷歌:" class="headerlink" title="谷歌:"></a>谷歌:</h2><p>谷歌操作比较简单,就是向<a href="https://link.jianshu.com/?t=https://www.google.com/webmasters/tools" target="_blank" rel="noopener">Google站长工具</a>提交sitemap</p>
<p>登录Google账号,添加站点并通过验证后,选择添加网址前缀:<code>https://usg-cn.web.app/</code></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714082054.png" alt=""></p>
<p>选择站点,之后在<code>索引</code>——<code>站点地图</code>中就能看到<code>添加/测试站点地图</code>,如下图:</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713173317.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714082345.png" alt=""></p>
<p>这样所有步骤就都完成了</p>
<p>接着等上一段时间(我是弄完就去睡觉了),之后在搜索引擎输入:</p>
<figure class="highlight plain"><table><tr><td class="code"><pre><span class="line">site:usg-cn.web.app/</span><br></pre></td></tr></table></figure>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714083449.png" alt=""></p>
<p>谷歌已经完成收录了~</p>
<p>百度的则一直显示抓取失败(这个问题困扰了我好久,一直没有解决),不知道是百度的问题还是我的问题,我用浏览器一直可以访问到站点地图。有知道原因的朋友欢迎在评论区讨论!如果成功了再更新</p>
<p>未完待续~~~~</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200714083611.png" alt=""></p>
]]></content>
<tags>
<tag>博客</tag>
<tag>hexo</tag>
<tag>google</tag>
</tags>
</entry>
<entry>
<title>Hexo优雅地使用图床</title>
<url>/2020/07/10/hexo-pic/</url>
<content><![CDATA[<p>其实很早就接触了Markdown语法,那还要追溯到学生时代。在学校实验室的时候需要对一些新来的师弟师妹进行培训,就需要写一些教程文档,那时候就开始利用Markdown+Typora编写教程。但苦于Markdown语法的特殊性,图片的插入尤为困难,移动文档的时候图片往往会挂掉,这对我来说十分不方便,所以最后不了了之,用回office作罢……</p>
<p>等到了工作的时候,来到东阳光刚入职那会,师姐就发来一个用Markdown写的教程文档,我眼前一亮,原来文档可以这么优美简洁。加上学生时代早有接触,我打起了重拾Markdown的信心,随后便疯狂地爱上了这种<strong>“轻量级标记语言”</strong>。</p>
<a id="more"></a>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710101818.png" alt=""></p>
<p>那么怎么解决图片的问题呢?经过简单的查找之后我找到了<code>PicGo</code>+<code>Github(cdn)</code>的强强组合,这样的组合既简单易用又稳定可靠,访问速度还比较快,也不用担心图片会被删掉。</p>
<h1 id="PicGo"><a href="#PicGo" class="headerlink" title="PicGo"></a>PicGo</h1><p>PicGo的主页:<a href="https://github.com/Molunerfinn/PicGo" target="_blank" rel="noopener">https://github.com/Molunerfinn/PicGo</a></p>
<p><strong>PicGo:一个有用的快速上传图片并获取图片URL链接的工具</strong></p>
<p>PicGo本体支持如下图床:</p>
<ul>
<li><code>七牛图床</code> v1.0</li>
<li><code>腾讯云 COS v4\v5 版本</code> v1.1和v1.5.0</li>
<li><code>又拍云</code> v1.2.0</li>
<li><code>GitHub</code> v1.5.0</li>
<li><code>SM.MS V2</code> v2.3.0-beta.0</li>
<li><code>阿里云 OSS</code> v1.6.0</li>
<li><code>Imgur</code> v1.6.0</li>
</ul>
<p>PicGo的界面:</p>
<p><img src= "/img/loading.gif" data-src="https://raw.githubusercontent.com/Molunerfinn/test/master/picgo/picgo-2.0.gif" alt="img"></p>
<p><img src= "/img/loading.gif" data-src="https://user-images.githubusercontent.com/12621342/34242310-b5056510-e655-11e7-8568-60ffd4f71910.gif" alt="picgo-menubar"></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710102357.png" alt="picgo的界面截图"></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710102504.png" alt=""></p>
<p>利用这样的小工具可以让我们在Markdown文档中轻易地插入图片,在线图片分享出去也特别方便,芜湖,起飞了!</p>
<p>安装教程:<a href="https://github.com/Molunerfinn/PicGo" target="_blank" rel="noopener">https://github.com/Molunerfinn/PicGo</a></p>
<hr>
<h1 id="GitHub图床"><a href="#GitHub图床" class="headerlink" title="GitHub图床"></a>GitHub图床</h1><p>GitHub想必大家都知道吧!<code>在线程序员交友平台</code>我们就是利用这个平台作为一个稳定可靠的图片保存位置。</p>
<h2 id="1-登录GitHub创建Repository"><a href="#1-登录GitHub创建Repository" class="headerlink" title="1.登录GitHub创建Repository"></a>1.登录GitHub创建Repository</h2><p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710103103.png" alt=""></p>
<h2 id="2-设置Repository"><a href="#2-设置Repository" class="headerlink" title="2.设置Repository"></a>2.设置Repository</h2><p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710103303.png" alt=""></p>
<ul>
<li>设置仓库名</li>
<li>设置为Public(重要),如果设置为私有的话外部就访问不到图片了</li>
<li>创建仓库</li>
</ul>
<h2 id="3-生成一个Token"><a href="#3-生成一个Token" class="headerlink" title="3.生成一个Token"></a>3.生成一个Token</h2><p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710103617.png" alt=""></p>
<p>点开头像的<code>设定值</code></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710103815.png" alt=""></p>
<p>点开左侧最下面的<code>开发人员设定</code></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710103921.png" alt=""></p>
<p>点击个人访问令牌</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710104043.png" alt=""></p>
<p>创建新的<code>Token</code></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710104124.png" alt=""></p>
<p>填写描述,选择<code>repo</code>,然后点击<code>Generate token</code>按钮</p>
<blockquote>
<p>注意:这串token十分重要,它只会显示一次,要记录下来好好保存,不能落入其他人手中喔!</p>
</blockquote>
<hr>
<h1 id="配置PicGo"><a href="#配置PicGo" class="headerlink" title="配置PicGo"></a>配置PicGo</h1><p><a href="https://github.com/Molunerfinn/PicGo/releases" target="_blank" rel="noopener">下载相应的版本</a></p>
<p>安装:不会装的看<a href="https://github.com/Molunerfinn/PicGo" target="_blank" rel="noopener">文档</a></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710104609.png" alt=""></p>
<blockquote>
<p>设定仓库名的时候,是按照“账户名/仓库名”的格式填写</p>
<p>分支名统一填写“master”</p>
<p>将之前的Token黏贴在这里</p>
<p>存储的路径可以按照我这样子写,就会在repository下创建一个“img”文件夹</p>
<p>自定义域名的作用是:图片上传成功后,PicGo会把“自定义域名+上传的图片名”拼成访问链接放到剪贴板上,所以自定义域名需要按照<code>https://raw.githubusercontent.com/用户名/RepositoryName/分支名</code>这样去填写(拼接方式可以参考本节后面的示例代码)</p>
<p>或者使用cdn加速:</p>
<p><code>https://cdn.jsdelivr.net/gh/用户名/RepositoryName</code>来进行加速</p>
</blockquote>
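<p>上面说的“自定义域名+上传的图片名”拼接规则,可以用下面这个小脚本直观感受一下(其中的用户名、仓库名和图片路径都是假设的示例值,替换成自己的配置即可,这里以文中提到的jsDelivr CDN前缀为例):</p>
<figure class="highlight python"><table><tr><td class="code"><pre><span class="line"># 按 PicGo 的拼接规则生成图片外链的小示例(用户名/仓库名/文件名均为假设值)</span><br><span class="line">user = 'yourname'              # GitHub 用户名(示例)</span><br><span class="line">repo = 'picBed'                # 图床仓库名(示例)</span><br><span class="line">filename = 'img/20200710.png'  # 上传后仓库中的图片路径(示例)</span><br><span class="line"></span><br><span class="line"># 使用 jsDelivr CDN 加速时的自定义域名前缀</span><br><span class="line">cdn_prefix = 'https://cdn.jsdelivr.net/gh/{}/{}'.format(user, repo)</span><br><span class="line">print(cdn_prefix + '/' + filename)</span><br><span class="line"># 输出形如:https://cdn.jsdelivr.net/gh/yourname/picBed/img/20200710.png</span><br></pre></td></tr></table></figure>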
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710104922.png" alt=""></p>
<p>这样之后就可以愉快地在文档中加入图片了,同时你的图片也可以在github仓库中查看到</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710142404.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200710142534.png" alt=""></p>
<hr>
<h1 id="码云Gitee图床"><a href="#码云Gitee图床" class="headerlink" title="码云Gitee图床"></a>码云Gitee图床</h1><p>教程:<a href="https://cychan811.gitee.io/cychan811/2020/07/04/PicGo-gitee%E6%90%AD%E5%BB%BA%E4%B8%AA%E4%BA%BA%E5%85%8D%E8%B4%B9%E5%9B%BE%E5%BA%8A/" target="_blank" rel="noopener">地址</a></p>
<p>码云官网:<a href="https://gitee.com/" target="_blank" rel="noopener">地址</a></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713212031.png" alt=""></p>
<p>速度比GitHub更快,而且还不会时不时出现无法提交的小bug。但是怎么说呢,码云gitee毕竟是国内的平台,论体量比GitHub小很多,也不排除什么时候突然就停止服务或者关闭api,所以这个用用就行,不要过度依赖。<strong>不过速度是真的快,毕竟是本土化的服务器</strong></p>
<h1 id="一些错误处理"><a href="#一些错误处理" class="headerlink" title="一些错误处理"></a>一些错误处理</h1><ul>
<li>上传失败:首先检查上传的文件或者剪贴板里的内容是否是支持格式的图片,如果确认无误的话可能是PicGo的问题,进入<code>PicGo设置</code>里的<code>设置Server</code>,关闭再打开;如果还是不行,重启软件可以解决大部分问题。</li>
<li>最好打开<code>上传前重命名</code>功能,因为有时候图片文件名是中文会导致上传错误,打开后可以把图片名设置为数字时间戳格式,这样方便很多。</li>
<li>剩下的遇到再补充</li>
</ul>
]]></content>
<tags>
<tag>hexo</tag>
<tag>next</tag>
<tag>图床</tag>
<tag>markdown</tag>
</tags>
</entry>
<entry>
<title>AD再见</title>
<url>/2021/04/21/AD/</url>
<content><![CDATA[<h1 id="AdGuardHome神器"><a href="#AdGuardHome神器" class="headerlink" title="AdGuardHome神器"></a>AdGuardHome神器</h1><p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421152731.png" alt="image-20210421152729678"></p>
<p>最近拿到了NanoPi R4S 开发板用来做软路由,以前也陆陆续续学习过搭建openwrt的一些方法,正好趁着新硬件到手来研究一下。最近发现网络中的广告越来越多了,所以就想利用openwrt里部署的AdGuard Home作为一个dns网关,帮我们去除网络中烦人的“牛皮癣”。</p>
<p>NanoPi R4S 开发板的全名是高性能边缘计算路由器R4S,这款开发板具有2个千兆以太网网口,一个由SoC直接引出,另外一个由PCIe转接。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142843.jpg" alt="img"></p>
<p>SoC用的是RK3399作为主控,主频1.8GHz,板载1G或者4G 内存,2个USB3.0接口可接USB WIFI、储存设备,保证了性能和扩展性。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142936.jpg" alt="img"></p>
<p>官方开发文档在这:(<a href="http://wiki.friendlyarm.com/wiki/index.php/NanoPi_R4S" target="_blank" rel="noopener">链接</a>)</p>
<p>装好openwrt系统就可以开始了(<a href="https://github.com/QiuSimons/R2S-R4S-X86-OpenWrt" target="_blank" rel="noopener">系统链接</a>)</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142853.png" alt="image-20210421093019026"></p>
<p>在系统–软件包–安装好<a href="https://github.com/AdguardTeam/AdGuardHome" target="_blank" rel="noopener">AdGuardHome</a>就可以进行下一步的配置了</p>
<p>AdGuardHome的官方<a href="https://adguard.com/zh_cn/adguard-home/overview.html" target="_blank" rel="noopener">主页</a></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142900.png" alt="image-20210421094757658"></p>
<p>以下是AdGuardHome的描述</p>
<blockquote>
<h3 id="AdGuardHome您和您的设备的隐私保护中心"><a href="#AdGuardHome您和您的设备的隐私保护中心" class="headerlink" title="AdGuardHome您和您的设备的隐私保护中心"></a>AdGuardHome您和您的设备的隐私保护中心</h3><p>免费和开源,功能强大的全网络广告和跟踪器阻止了DNS服务器。</p>
<p>AdGuard Home是用于阻止广告和跟踪的全网络软件。设置完成后,它将涵盖您所有的家用设备,并且您不需要任何客户端软件。</p>
<p>它作为DNS服务器运行,将跟踪域重新路由到“黑洞”,从而防止您的设备连接到这些服务器。它基于我们用于公共<a href="https://adguard.com/en/adguard-dns/overview.html" target="_blank" rel="noopener">AdGuard DNS</a>服务器的软件,两者共享许多通用代码。</p>
</blockquote>
<p>重点来说,AdGuardHome的优点有:</p>
<ol>
<li>局域网一次部署,所有客户端都生效</li>
<li>无需在手机端安装去广告软件,避免额外的耗电和性能损耗</li>
<li>可以和其他代理共存</li>
<li>高性能,RK3399性能不是盖的</li>
</ol>
<p>好的,废话不多说,开始安装</p>
<h1 id="安装"><a href="#安装" class="headerlink" title="安装"></a>安装</h1><p>点击更新,第一次更新会安装AdGuardHome所需要的引擎核心,现在等它更新好就行了。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142907.png" alt="image-20210421095710947"></p>
<p>更新好了之后启动AdGuardHome,点击保存,再点击那个绿色的图标,或者访问你的管理ip加上:3000端口,就可以进入AdGuardHome的后台了。</p>
<p>点击【开始配置】</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142913.png" alt="image-20210421100510762"></p>
<ol>
<li>网页后台管理界面默认是80端口,可以改为3000端口,这样我们在刚刚那个openwrt界面点击按钮就可以进入了。</li>
<li>DNS服务器端口默认是53,但是已经被占用了,现在改为533(要记得喔,下面会用上的)。</li>
<li>点击下一步</li>
</ol>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142937.png" alt="image-20210421100938556"></p>
<p>创建登录账户和密码 </p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142938.png" alt="image-20210421102223064"></p>
<p>点击下一步</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142939.png" alt="image-20210421102259191"></p>
<p>到这就安装完成了,接下来可以进行配置了</p>
<h1 id="配置"><a href="#配置" class="headerlink" title="配置"></a>配置</h1><p>接着回到openwrt的界面会发现状态显示<strong><em>AdGuardHome 运行中***</em></strong>未重定向***,这里是因为还没将你的dns服务器重定向到AdGuardHome核心。</p>
<h2 id="openwrt设置"><a href="#openwrt设置" class="headerlink" title="openwrt设置"></a>openwrt设置</h2><p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142940.png" alt="image-20210421102703249"></p>
<ol>
<li>点击533重定向复选框</li>
<li>选择重定向53端口到AdGuardHome</li>
<li>保存应用</li>
</ol>
<p>就会发现状态提示<strong>AdGuardHome 运行中 已重定向</strong></p>
<p>这样就完成重定向了。</p>
<h2 id="AdGuardHome-DNS设置"><a href="#AdGuardHome-DNS设置" class="headerlink" title="AdGuardHome-DNS设置"></a>AdGuardHome-DNS设置</h2><p>用你刚刚设置好的账户和密码登录后台比如说我的AdGuardHome后台就是<a href="http://192.168.1.1:3000,登录完成点击设置--DNS设置">http://192.168.1.1:3000,登录完成点击设置--DNS设置</a></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142941.png" alt="image-20210421102418299"></p>
<ol>
<li>复制下方DNS列表到上游DNS服务器文本框</li>
<li>填写Bootstrap DNS 服务器列表</li>
<li>选择并行请求</li>
<li>应用</li>
<li>测试上游DNS</li>
</ol>
<figure class="highlight dts"><table><tr><td class="code"><pre><span class="line">上游DNS服务器列表</span><br><span class="line"></span><br><span class="line"><span class="number">119.29</span><span class="number">.29</span><span class="number">.29</span></span><br><span class="line"><span class="number">1.2</span><span class="number">.4</span><span class="number">.8</span></span><br><span class="line"><span class="number">101.226</span><span class="number">.4</span><span class="number">.6</span></span><br><span class="line"><span class="symbol">tcp:</span><span class="comment">//114.114.114.114</span></span><br><span class="line"><span class="symbol">tcp:</span><span class="comment">//223.5.5.5</span></span><br><span class="line"><span class="symbol">tcp:</span><span class="comment">//223.6.6.6</span></span><br><span class="line"><span class="symbol">tcp:</span><span class="comment">//8.8.4.4</span></span><br><span class="line"><span class="symbol">tcp:</span><span class="comment">//202.14.67.4</span></span><br><span class="line"><span class="symbol">tcp:</span><span class="comment">//202.14.67.14</span></span><br><span class="line"><span class="symbol">tcp:</span><span class="comment">//202.130.97.65</span></span><br><span class="line"><span class="symbol">tcp:</span><span class="comment">//202.130.97.66</span></span><br><span class="line"><span class="symbol">tcp:</span><span class="comment">//168.95.192.1</span></span><br><span class="line"><span class="symbol">https:</span><span class="comment">//1.1.1.1/dns-query</span></span><br><span class="line"><span class="symbol">https:</span><span class="comment">//1.0.0.1/dns-query</span></span><br><span class="line"><span class="symbol">tls:</span><span class="comment">//8.8.8.8</span></span><br><span class="line"><span class="symbol">tls:</span><span class="comment">//8.8.4.4</span></span><br><span class="line"><span class="symbol">tls:</span><span class="comment">//dns.google:853</span></span><br></pre></td></tr></table></figure>
<figure class="highlight accesslog"><table><tr><td class="code"><pre><span class="line">Bootstrap DNS 服务器列表</span><br><span class="line"></span><br><span class="line"><span class="number">219.141.136.10</span>(北京电信)</span><br><span class="line"><span class="number">219.141.140.10</span>(北京电信)</span><br><span class="line"><span class="number">202.96.199.133</span>(上海电信)</span><br><span class="line"><span class="number">119.29.29.29</span></span><br><span class="line"><span class="number">223.5.5.5</span></span><br><span class="line"><span class="number">180.76.76.76</span></span><br><span class="line"><span class="number">8.8.8.8</span></span><br><span class="line"><span class="number">8.8.4.4</span></span><br><span class="line"><span class="number">208.67.222.222</span></span><br></pre></td></tr></table></figure>
<p>看到提示指定的DNS测试通过就行了;不通过的话,根据提示把对应的服务器从列表中去掉即可。例如提示1.1.1.1这个服务器不通过,就在上面的上游DNS服务器文本框中把它删掉。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142942.png" alt="image-20210421103706416"></p>
<p><strong>Bootstrap DNS 服务器列表</strong>可以参考你openwrt概览里网络状况显示的本地DNS情况填入。像我比较懒,就直接填写宽带拨号获取的DNS了</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142943.png" alt="image-20210421104556157"></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142944.png" alt="image-20210421104724173"></p>
<p>完成这步,DNS服务器就算设置成功了,接下来就是设置AdGuardHome的重头戏:广告过滤功能!</p>
<h2 id="AdGuardHome–广告过滤"><a href="#AdGuardHome–广告过滤" class="headerlink" title="AdGuardHome–广告过滤"></a>AdGuardHome–广告过滤</h2><p>点击 过滤器–DNS封锁清单添加下方合适的规则并将对应规则打钩</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142945.png" alt="image-20210421104231396"></p>
<ol>
<li><p>点击添加阻止的列表–添加一个自定义列表</p>
</li>
<li><p>填入</p>
<figure class="highlight awk"><table><tr><td class="code"><pre><span class="line">名称:ADGuard规则</span><br><span class="line">URL:https:<span class="regexp">//</span>raw.githubusercontent.com<span class="regexp">/privacy-protection-tools/</span>anti-AD<span class="regexp">/master/</span>anti-ad-easylist.txt</span><br></pre></td></tr></table></figure>
</li>
<li><p>不需要填写太多,上面这条已经够用了;如果有需要的话,还可以添加下面这些</p>
<figure class="highlight elixir"><table><tr><td class="code"><pre><span class="line">AdAway,<span class="symbol">https:</span>/<span class="regexp">/adaway.org/hosts</span>.txt</span><br><span class="line">乘风 视频,<span class="symbol">https:</span>/<span class="regexp">/gitee.com/xinggsf</span><span class="regexp">/Adblock-Rule/raw</span><span class="regexp">/master/mv</span>.txt</span><br><span class="line">乘风 广告,<span class="symbol">https:</span>/<span class="regexp">/gitee.com/xinggsf</span><span class="regexp">/Adblock-Rule/raw</span><span class="regexp">/master/rule</span>.txt</span><br><span class="line">My AdFilters,<span class="symbol">https:</span>/<span class="regexp">/gitee.com/halflife</span><span class="regexp">/list/raw</span><span class="regexp">/master/ad</span>.txt</span><br><span class="line">隐私相关</span><br><span class="line">CJX<span class="string">'s uBlock list,https://gitee.com/cjx82630/cjxlist/raw/master/cjx-ublock.txt</span></span><br><span class="line"><span class="string">EasyPrivacy,https://easylist-downloads.adblockplus.org/easyprivacy.txt</span></span><br><span class="line"><span class="string">I don'</span>t care about cookies,<span class="symbol">https:</span>/<span class="regexp">/www.i-dont-care-about-cookies.eu/abp</span><span class="regexp">/</span></span><br></pre></td></tr></table></figure>
</li>
</ol>
<ol start="4">
<li>保存–打勾启用</li>
</ol>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142946.png" alt="image-20210421105326939"></p>
<p>这样就完成了ADGuard的设置了。</p>
<p>接下来回到openwrt设置DNS转发(最后一部分了别急)</p>
<h3 id="openwrt-DNS转发"><a href="#openwrt-DNS转发" class="headerlink" title="openwrt DNS转发"></a>openwrt DNS转发</h3><ol>
<li>网络——DHCP/DNS——服务器设置——基本设置</li>
<li>找到【DNS转发】,点击添加2条,把之前设置的那个AdGuardHome服务器地址和端口号填上去(之前设置的是192.168.2.1:533和127.0.0.1:533)。ps:这里要把端口前的“:”改为“#”,即填入<code>192.168.2.1#533</code>和<code>127.0.0.1#533</code></li>
<li>保存应用</li>
</ol>
<p>这样就完成了所有配置的过程,可以愉快地上网了。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142947.png" alt="image-20210421110031298"></p>
<h1 id="总结"><a href="#总结" class="headerlink" title="总结"></a>总结</h1><p>配置完之后就可以回到AdGuardHome的仪表盘了。可以直观地看到拦截的效果了。</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20210421142948.png" alt="image-20210421142831927"></p>
<p>可以见到,局域网广告防护已经开始,并且生效了!</p>
<p>enjoy!</p>
]]></content>
<tags>
<tag>openwrt</tag>
<tag>AD</tag>
<tag>luci</tag>
</tags>
</entry>
<entry>
<title>学习matplotlib-python数据可视化</title>
<url>/2020/08/17/learn-matplotlib/</url>
<content><![CDATA[<h1 id="Matplotlib"><a href="#Matplotlib" class="headerlink" title="Matplotlib"></a>Matplotlib</h1><h2 id="介绍"><a href="#介绍" class="headerlink" title="介绍"></a>介绍</h2><p><code>matplotlib</code>是<a href="https://zh.wikipedia.org/wiki/Python" target="_blank" rel="noopener">Python</a>编程语言及其数值数学扩展包 <a href="https://zh.wikipedia.org/wiki/NumPy" target="_blank" rel="noopener">NumPy</a>的可视化操作界面。它利用通用的<a href="https://zh.wikipedia.org/wiki/部件工具箱" target="_blank" rel="noopener">图形用户界面工具包</a>,如Tkinter, wxPython, <a href="https://zh.wikipedia.org/wiki/Qt" target="_blank" rel="noopener">Qt</a>或<a href="https://zh.wikipedia.org/wiki/GTK%2B" target="_blank" rel="noopener">GTK+</a>,向应用程序嵌入式绘图提供了<a href="https://zh.wikipedia.org/wiki/应用程序接口" target="_blank" rel="noopener">应用程序接口</a>(API)。此外,matplotlib还有一个基于图像处理库(如开放图形库OpenGL)的pylab接口,其设计与<a href="https://zh.wikipedia.org/wiki/MATLAB" target="_blank" rel="noopener">MATLAB</a>非常类似–尽管并不怎么好用<a href="https://zh.wikipedia.org/wiki/Wikipedia:列明来源" target="_blank" rel="noopener">[来源请求]</a>。SciPy就是用matplotlib进行图形绘制。</p>
<p>matplotlib最初由John D. Hunter撰写,它拥有一个活跃的开发社区,并且根据BSD样式许可证分发。 在John D. Hunter2012年去世前不久,Michael Droettboom被提名为matplotlib的主要开发者。</p>
<p>截至2015年10月30日,matplotlib 1.5.x支持Python 2.7到3.5版本。Matplotlib 1.2是第一个支持Python 3.x的版本。Matplotlib 1.4是支持Python 2.6的最后一个版本。</p>
<blockquote>
<p>Matplotlib 可能是 Python 2D-绘图领域使用最广泛的套件。它能让使用者很轻松地将数据图形化,并且提供多样化的输出格式。这里将会探索 matplotlib 的常见用法。</p>
</blockquote>
<h2 id="安装"><a href="#安装" class="headerlink" title="安装"></a>安装</h2><p>Matplotlib及其依赖项可作为轮包用于macOS,Windows和Linux发行版:</p>
<figure class="highlight cmd"><table><tr><td class="code"><pre><span class="line">python -m pip install -U pip</span><br><span class="line">python -m pip install -U matplotlib</span><br></pre></td></tr></table></figure>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200817110828.png" alt=""></p>
]]></content>
<tags>
<tag>python</tag>
<tag>matplotlib</tag>
</tags>
</entry>
<entry>
<title>python利用opencv进行相机标定(完全版)</title>
<url>/2020/07/20/opencv-biaoding/</url>
<content><![CDATA[<p>如今的低价单孔摄像机(照相机)会给图像带来很多畸变。畸变主要有两种:径向畸变和切向畸变。如下图所示,用红色直线将棋盘的两个边标注出来,但是你会发现棋盘的边界并不和红线重合,所有我们认为应该是直线的地方都凸出来了。</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200720160250.png" alt=""></p>
<p>在 3D 相关应用中,必须要先校正这些畸变。为了找到这些纠正参数,我们必须要提供一些包含明显图案模式的样本图片(比如说棋盘)。我们可以在上面找到一些特殊点(如棋盘的四个角点),我们知道这些特殊点在图片中的位置以及它们的真实位置。有了这些信息,我们就可以使用数学方法求解畸变系数。这就是整个故事的摘要了。为了得到更好的结果,我们至少需要 10 个这样的图案模式。</p>
<h1 id="实现步骤"><a href="#实现步骤" class="headerlink" title="实现步骤"></a>实现步骤</h1><h2 id="拍摄棋盘图"><a href="#拍摄棋盘图" class="headerlink" title="拍摄棋盘图"></a>拍摄棋盘图</h2><p>首先打印下图:<a href="http://120.79.182.159:8000/f/9ad20d5debfb4aa68898/?dl=1" target="_blank" rel="noopener">下载</a> 也可直接保存</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200720160635.png" alt=""></p>
<p>将其固定到一个平面上,使用相机从不同角度,不同位置拍摄(10-20)张标定图。类似这样的:</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200720160922.png" alt=""></p>
<p>python调用opencv相机拍照代码(例):</p>
<figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line">camera=cv2.VideoCapture(<span class="number">0</span>)</span><br><span class="line">i = <span class="number">0</span></span><br><span class="line"><span class="keyword">while</span> <span class="number">1</span>:</span><br><span class="line"> (grabbed, img) = camera.read()</span><br><span class="line"> cv2.imshow(<span class="string">'img'</span>,img)</span><br><span class="line"> <span class="keyword">if</span> cv2.waitKey(<span class="number">1</span>) & <span class="number">0xFF</span> == ord(<span class="string">'j'</span>): <span class="comment"># 按j保存一张图片</span></span><br><span class="line"> i += <span class="number">1</span></span><br><span class="line"> u = str(i)</span><br><span class="line"> firename=str(<span class="string">'./img'</span>+u+<span class="string">'.jpg'</span>)</span><br><span class="line"> cv2.imwrite(firename, img)</span><br><span class="line"> print(<span class="string">'写入:'</span>,firename)</span><br><span class="line"> <span class="keyword">if</span> cv2.waitKey(<span class="number">1</span>) & <span class="number">0xFF</span> == ord(<span class="string">'q'</span>):</span><br><span class="line"> <span class="keyword">break</span></span><br></pre></td></tr></table></figure>
<p>按<code>j</code>拍摄图片,将会按照顺序批量保存,按<code>q</code>退出程序。</p>
<hr>
<h2 id="寻找棋盘图并且标定-检视标定后结果"><a href="#寻找棋盘图并且标定-检视标定后结果" class="headerlink" title="寻找棋盘图并且标定+检视标定后结果"></a>寻找棋盘图并且标定+检视标定后结果</h2><h3 id="利用opencv寻找棋盘"><a href="#利用opencv寻找棋盘" class="headerlink" title="利用opencv寻找棋盘"></a>利用opencv寻找棋盘</h3><p>为了找到棋盘的图案,我们要使用函数 cv2.findChessboardCorners()。我们还需要传入图案的类型,比如说 8x8 的格子或 5x5 的格子等。在本例中我们使用的9×6 的格子。(通常情况下棋盘都是 8x8 或者 7x7)。它会返回角点,如果得到图像的话返回值类型(Retval)就会是 True。这些角点会按顺序排列(从左到右,从上到下)</p>
<blockquote>
<p>这个函数可能不会找出所有图像中应有的图案。所以一个好的方法是编写代码,启动摄像机并在每一帧中检查是否有应有的图案。在我们获得图案之后我们要找到角点并把它们保存成一个列表。在读取下一帧图像之前要设置一定的间隔,这样我们就有足够的时间调整棋盘的方向。继续这个过程直到我们得到足够多的好图案。就算是我们举的这个例子,在所有的 14 幅图像中也不知道有几幅是好的。所以我们要读取每一张图像,从中找出好的、能用的。</p>
</blockquote>
<blockquote>
<p>除了使用棋盘之外,我们还可以使用环形格子,但是要使用函数 cv2.findCirclesGrid() 来找图案。据说使用环形格子只需要很少的图像就可以了。</p>
</blockquote>
<p>在找到这些角点之后我们可以使用函数 cv2.cornerSubPix() 增加准确度。我们使用函数 cv2.drawChessboardCorners() 绘制图案。所有的这些步骤都被包含在下面的代码中了:</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200720161939.png" alt=""></p>
<h3 id="标定"><a href="#标定" class="headerlink" title="标定"></a>标定</h3><p>在得到了这些对象点和图像点之后,我们已经准备好来做摄像机标定了。我们要使用的函数是 cv2.calibrateCamera()。它会返回摄像机矩阵,畸变系数,旋转和变换向量等。</p>
<h3 id="畸变矫正"><a href="#畸变矫正" class="headerlink" title="畸变矫正"></a>畸变矫正</h3><p>现在我们找到我们想要的东西了,我们可以找到一幅图像来对他进行校正了。OpenCV 提供了两种方法,我们都学习一下。不过在那之前我们可以使用从函数 cv2.getOptimalNewCameraMatrix() 得到的自由缩放系数对摄像机矩阵进行优化。如果缩放系数 alpha = 0,返回的非畸变图像会带有最少量的不想要的像素。它甚至有可能在图像角点去除一些像素。如果 alpha = 1,所有的像素都会被返回,还有一些黑图像。它还会返回一个 ROI 图像,我们可以用来对结果进行裁剪。</p>
<p>函数:cv2.getOptimalNewCameraMatrix(mtx,dist,(w,h),<code>1</code>,(w,h))中参数<code>1</code>是个坑,</p>
<p>官方文档给的参数是<code>1</code>但是标定后的结果是一个球形的视角,我查了好久资料最后咨询了大佬才发现这个坑</p>
<p>这里我们使用cv2.getOptimalNewCameraMatrix(mtx,dist,(w,h),<code>0</code>,(w,h))参数设置为<code>0</code></p>
<h3 id="畸变到非畸变"><a href="#畸变到非畸变" class="headerlink" title="畸变到非畸变"></a>畸变到非畸变</h3><p>下面代码中</p>
<ul>
<li><p>dst1图像使用的是 cv2.undistort() 这是最简单的方法。只需使用这个函数和上边得到的 ROI 对结果进行裁剪</p>
</li>
<li><p>dst2图像使用的是remapping 这应该属于“曲线救国”了。首先我们要找到从畸变图像到非畸变图像的映射方程。再使用重映射方程。(代码中有详细用法)</p>
</li>
</ul>
<p>两种效果可以自行对比看看</p>
<p>纠正前后对比:</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200720163140.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200720163139.png" alt=""></p>
<h3 id="反向投影误差"><a href="#反向投影误差" class="headerlink" title="反向投影误差"></a>反向投影误差</h3><p>我们可以利用反向投影误差对我们找到的参数的准确性进行估计。得到的结果越接近 0 越好。有了内部参数,畸变参数和旋转变换矩阵,我们就可以使用 cv2.projectPoints() 将对象点转换到图像点。然后就可以计算变换得到图像与角点检测算法的绝对差了。然后我们计算所有标定图像的误差平均值。(但是本文不需要,所以没有将其写入)</p>
<h2 id="主要代码"><a href="#主要代码" class="headerlink" title="主要代码"></a>主要代码</h2><p>需要的库:<code>opencv-python</code> <code>numpy</code> <code>glob</code> </p>
<figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> glob</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># 找棋盘格角点</span></span><br><span class="line"><span class="comment"># 设置寻找亚像素角点的参数,采用的停止准则是最大循环次数30和最大误差容限0.001</span></span><br><span class="line">criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, <span class="number">30</span>, <span class="number">0.001</span>) <span class="comment"># 阈值</span></span><br><span class="line"><span class="comment">#棋盘格模板规格</span></span><br><span class="line">w = <span class="number">9</span> <span class="comment"># 10 - 1</span></span><br><span class="line">h = <span class="number">6</span> <span class="comment"># 7 - 1</span></span><br><span class="line"><span class="comment"># 世界坐标系中的棋盘格点,例如(0,0,0), (1,0,0), (2,0,0) ....,(8,5,0),去掉Z坐标,记为二维矩阵</span></span><br><span class="line">objp = np.zeros((w*h,<span class="number">3</span>), np.float32)</span><br><span class="line">objp[:,:<span class="number">2</span>] = np.mgrid[<span class="number">0</span>:w,<span class="number">0</span>:h].T.reshape(<span class="number">-1</span>,<span class="number">2</span>)</span><br><span class="line">objp = objp*<span class="number">18.1</span> <span class="comment"># 18.1 mm</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 储存棋盘格角点的世界坐标和图像坐标对</span></span><br><span class="line">objpoints = [] <span class="comment"># 在世界坐标系中的三维点</span></span><br><span class="line">imgpoints = [] <span class="comment"># 在图像平面的二维点</span></span><br><span class="line"><span class="comment">#加载pic文件夹下所有的jpg图像</span></span><br><span class="line">images = glob.glob(<span class="string">'./*.jpg'</span>) <span class="comment"># 拍摄的十几张棋盘图片所在目录</span></span><br><span class="line"></span><br><span class="line">i=<span class="number">0</span></span><br><span class="line"><span class="keyword">for</span> fname <span class="keyword">in</span> images:</span><br><span class="line"></span><br><span class="line"> img = cv2.imread(fname)</span><br><span class="line"> <span class="comment"># 获取画面中心点</span></span><br><span class="line"> <span class="comment">#获取图像的长宽</span></span><br><span class="line"> h1, w1 = img.shape[<span class="number">0</span>], img.shape[<span class="number">1</span>]</span><br><span class="line"> gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)</span><br><span class="line"> u, v = img.shape[:<span class="number">2</span>]</span><br><span class="line"> <span class="comment"># 找到棋盘格角点</span></span><br><span class="line"> ret, corners = cv2.findChessboardCorners(gray, (w,h),<span class="literal">None</span>)</span><br><span class="line"> <span class="comment"># 如果找到足够点对,将其存储起来</span></span><br><span class="line"> <span class="keyword">if</span> ret == <span class="literal">True</span>:</span><br><span class="line"> print(<span class="string">"i:"</span>, i)</span><br><span class="line"> i = i+<span class="number">1</span></span><br><span class="line"> <span class="comment"># 在原角点的基础上寻找亚像素角点</span></span><br><span class="line"> cv2.cornerSubPix(gray,corners,(<span class="number">11</span>,<span class="number">11</span>),(<span class="number">-1</span>,<span class="number">-1</span>),criteria)</span><br><span 
class="line"> <span class="comment">#追加进入世界三维点和平面二维点中</span></span><br><span class="line"> objpoints.append(objp)</span><br><span class="line"> imgpoints.append(corners)</span><br><span class="line"> <span class="comment"># 将角点在图像上显示</span></span><br><span class="line"> cv2.drawChessboardCorners(img, (w,h), corners, ret)</span><br><span class="line"> cv2.namedWindow(<span class="string">'findCorners'</span>, cv2.WINDOW_NORMAL)</span><br><span class="line"> cv2.resizeWindow(<span class="string">'findCorners'</span>, <span class="number">640</span>, <span class="number">480</span>)</span><br><span class="line"> cv2.imshow(<span class="string">'findCorners'</span>,img)</span><br><span class="line"> cv2.waitKey(<span class="number">200</span>)</span><br><span class="line">cv2.destroyAllWindows()</span><br><span class="line"><span class="comment">#%% 标定</span></span><br><span class="line">print(<span class="string">'正在计算'</span>)</span><br><span class="line"><span class="comment">#标定</span></span><br><span class="line">ret, mtx, dist, rvecs, tvecs = \</span><br><span class="line"> cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::<span class="number">-1</span>], <span class="literal">None</span>, <span class="literal">None</span>)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">print(<span class="string">"ret:"</span>,ret )</span><br><span class="line">print(<span class="string">"mtx:\n"</span>,mtx) <span class="comment"># 内参数矩阵</span></span><br><span class="line">print(<span class="string">"dist畸变值:\n"</span>,dist ) <span class="comment"># 畸变系数 distortion cofficients = (k_1,k_2,p_1,p_2,k_3)</span></span><br><span class="line">print(<span class="string">"rvecs旋转(向量)外参:\n"</span>,rvecs) <span class="comment"># 旋转向量 # 外参数</span></span><br><span class="line">print(<span class="string">"tvecs平移(向量)外参:\n"</span>,tvecs ) <span class="comment"># 平移向量 # 外参数</span></span><br><span class="line">newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (u, v), <span class="number">0</span>, (u, v))</span><br><span class="line">print(<span class="string">'newcameramtx外参'</span>,newcameramtx)</span><br><span class="line"><span class="comment">#打开摄像机</span></span><br><span class="line">camera=cv2.VideoCapture(<span class="number">0</span>)</span><br><span class="line"><span class="keyword">while</span> <span class="literal">True</span>:</span><br><span class="line"> (grabbed,frame)=camera.read()</span><br><span class="line"> h1, w1 = frame.shape[:<span class="number">2</span>]</span><br><span class="line"> newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (u, v), <span class="number">0</span>, (u, v))</span><br><span class="line"> <span class="comment"># 纠正畸变</span></span><br><span class="line"> dst1 = cv2.undistort(frame, mtx, dist, <span class="literal">None</span>, newcameramtx)</span><br><span class="line"> <span class="comment">#dst2 = cv2.undistort(frame, mtx, dist, None, newcameramtx)</span></span><br><span class="line"> mapx,mapy=cv2.initUndistortRectifyMap(mtx,dist,<span class="literal">None</span>,newcameramtx,(w1,h1),<span class="number">5</span>)</span><br><span class="line"> dst2=cv2.remap(frame,mapx,mapy,cv2.INTER_LINEAR)</span><br><span class="line"> <span class="comment"># 裁剪图像,输出纠正畸变以后的图片</span></span><br><span class="line"> x, y, w1, h1 = roi</span><br><span class="line"> dst1 = dst1[y:y + h1, x:x + w1]</span><br><span class="line"></span><br><span class="line"> <span class="comment">#cv2.imshow('frame',dst2)</span></span><br><span 
class="line"> <span class="comment">#cv2.imshow('dst1',dst1)</span></span><br><span class="line"> cv2.imshow(<span class="string">'dst2'</span>, dst2)</span><br><span class="line"> <span class="keyword">if</span> cv2.waitKey(<span class="number">1</span>) & <span class="number">0xFF</span> == ord(<span class="string">'q'</span>): <span class="comment"># 按q保存一张图片</span></span><br><span class="line"> cv2.imwrite(<span class="string">"../u4/frame.jpg"</span>, dst1)</span><br><span class="line"> <span class="keyword">break</span></span><br><span class="line"></span><br><span class="line">camera.release()</span><br><span class="line">cv2.destroyAllWindows()</span><br></pre></td></tr></table></figure>
<p>代码放到图片相同的文件夹直接运行即可</p>
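<p>这里顺便把前面提到的反向投影误差补充一个计算示例,直接沿用上面主要代码里标定得到的 objpoints、imgpoints、rvecs、tvecs、mtx、dist 即可,结果越接近 0 越好:</p>
<figure class="highlight python"><table><tr><td class="code"><pre><span class="line"># 计算平均反向投影误差(沿用上面标定代码中的变量)</span><br><span class="line">total_error = 0</span><br><span class="line">for i in range(len(objpoints)):</span><br><span class="line">    # 用标定得到的内参和外参,把三维对象点投影回图像平面</span><br><span class="line">    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)</span><br><span class="line">    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)</span><br><span class="line">    total_error += error</span><br><span class="line">print("平均反投影误差:", total_error / len(objpoints))</span><br></pre></td></tr></table></figure>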
<h1 id="效果对比"><a href="#效果对比" class="headerlink" title="效果对比"></a>效果对比</h1><p>纠正前后:</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200720163313.png" alt=""><br><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200720163314.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200720163140.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200720163139.png" alt=""></p>
<p>相机标定完成~</p>
<div class="note success">
<p>success 标定完成的参数:</p>
</div>
<div class="hide-toggle" ><div class="hide-button toggle-title" style=""><i class="fas fa-caret-right fa-fw"></i><span>手头上这个相机镜头标定参数</span></div>
<div class="hide-content"><p>每个相机摄像头的情况都不同按需使用</p><p>dist=np.array(([[-0.58650416 , 0.59103816, -0.00443272 , 0.00357844 ,-0.27203275]]))<br>newcameramtx=np.array([[189.076828 , 0. , 361.20126638]<br> ,[ 0 ,2.01627296e+04 ,4.52759577e+02]<br> ,[0, 0, 1]])<br>mtx=np.array([[398.12724231 , 0. , 304.35638757],<br> [ 0. , 345.38259888, 282.49861858],<br> [ 0., 0., 1. ]])<br>ret: 1.2796736596876943<br>rvecs旋转(向量)外参:<br> [array([[-0.1273159 ],<br> [ 0.14990368],<br> [-0.03444583]]), array([[-0.09406134],<br> [ 0.00311094],<br> [ 0.03877124]]), array([[ 0.46123299],<br> [ 0.13606529],<br> [-0.10644641]]), array([[-0.21371843],<br> [ 0.19346393],<br> [ 0.05795452]]), array([[-0.06136152],<br> [-0.05609094],<br> [-0.10779057]]), array([[-0.12671277],<br> [ 0.19181691],<br> [-0.01144501]]), array([[-0.10065723],<br> [ 0.11067488],<br> [-0.00420227]]), array([[-0.25254906],<br> [ 0.05724545],<br> [ 0.06326385]]), array([[ 0.06929893],<br> [ 0.16462152],<br> [-0.09935668]]), array([[ 0.32955811],<br> [ 0.22348145],<br> [-0.08321155]]), array([[ 0.0963841 ],<br> [-0.05720288],<br> [ 0.00220535]]), array([[-0.0885636 ],<br> [-0.03092561],<br> [ 0.03529275]]), array([[ 0.03313787],<br> [-0.05300994],<br> [-0.03433814]]), array([[-0.22302867],<br> [ 0.18819738],<br> [-0.03371187]]), array([[-0.19460224],<br> [ 0.1036492 ],<br> [ 0.03301566]]), array([[-0.27115415],<br> [ 0.18957621],<br> [-0.04709229]]), array([[-0.12627705],<br> [ 0.0753438 ],<br> [ 0.0761791 ]]), array([[ 0.15356268],<br> [-0.02614756],<br> [ 0.02406217]]), array([[ 0.69316168],<br> [ 0.19622708],<br> [-0.18706069]]), array([[-0.09555645],<br> [ 0.02551495],<br> [ 0.02218898]]), array([[-0.08255654],<br> [-0.07209258],<br> [ 0.04271465]]), array([[ 0.08770757],<br> [-0.02304098],<br> [-0.05008243]]), array([[ 0.58513697],<br> [-0.00604693],<br> [-0.1598063 ]]), array([[-0.07233849],<br> [-0.04780769],<br> [-0.06191515]]), array([[ 0.09651254],<br> [ 0.02579441],<br> [-0.00947478]]), array([[ 0.03501638],<br> [-0.02501282],<br> [-0.07304343]]), array([[-0.10470468],<br> [ 0.21112561],<br> [-0.0983761 ]]), array([[-0.12674786],<br> [ 0.1432598 ],<br> [-0.01007719]]), array([[-0.11004829],<br> [ 0.06968173],<br> [ 0.05585313]]), array([[-0.41743998],<br> [ 0.17304611],<br> [ 0.03084559]]), array([[-0.10236722],<br> [ 0.01277654],<br> [-0.03390285]]), array([[0.22726439],<br> [0.14038084],<br> [0.01124049]]), array([[-0.15304123],<br> [ 0.04465005],<br> [ 0.06240299]])]<br>tvecs平移(向量)外参:<br> [array([[145.08681235],<br> [-76.17106891],<br> [699.69778255]]), array([[-183.67717477],<br> [-163.96393688],<br> [ 688.85439168]]), array([[104.0920611 ],<br> [ 23.92271463],<br> [965.22859587]]), array([[ 15.24948656],<br> [-54.85109955],<br> [795.39600843]]), array([[ 198.06011875],<br> [-175.91815396],<br> [ 719.52217088]]), array([[ 44.04717785],<br> [-108.51372353],<br> [ 788.45975705]]), array([[ -26.16828067],<br> [-188.47275832],<br> [ 771.99690841]]), array([[-139.14245711],<br> [-124.82244434],<br> [ 644.34844619]]), array([[ 95.41419669],<br> [-22.10474336],<br> [747.43156932]]), array([[-16.25541066],<br> [-60.23640891],<br> [677.13919736]]), array([[-220.34618611],<br> [ -12.6889694 ],<br> [ 708.18042632]]), array([[-205.93499674],<br> [ -95.59986207],<br> [ 709.15135801]]), array([[253.32869421],<br> [-65.19615285],<br> [793.36052372]]), array([[ 8.6811058 ],<br> [-18.70531877],<br> [786.28091437]]), array([[-135.91340565],<br> [ -41.83864798],<br> [ 734.08050232]]), array([[-10.36373957],<br> [-74.3822385 ],<br> [775.58055384]]), 
array([[-181.85146859],<br> [-162.51644736],<br> [ 686.77992674]]), array([[-152.68145934],<br> [ -45.11437087],<br> [ 742.99524497]]), array([[ 72.01815541],<br> [-174.95234447],<br> [ 954.17455852]]), array([[-180.90841277],<br> [-186.78922299],<br> [ 694.5911876 ]]), array([[-213.22423756],<br> [-180.87955611],<br> [ 668.22586979]]), array([[220.45960743],<br> [ -3.88665195],<br> [782.2584453 ]]), array([[-118.59571239],<br> [ -51.01586357],<br> [ 905.16719607]]), array([[ 213.87203907],<br> [-198.38786649],<br> [ 766.26267678]]), array([[197.15909792],<br> [-11.90335064],<br> [831.47489862]]), array([[220.76484713],<br> [-60.95718003],<br> [760.66883997]]), array([[117.86186858],<br> [-64.75570632],<br> [768.97222101]]), array([[ -39.59646337],<br> [-165.78421993],<br> [ 736.04088074]]), array([[-123.20719029],<br> [-164.0644578 ],<br> [ 743.43485414]]), array([[-19.65524135],<br> [-69.18741504],<br> [690.47472849]]), array([[-203.72891175],<br> [ -20.1545843 ],<br> [ 718.13434244]]), array([[ 40.16988244],<br> [-68.66550898],<br> [795.54461358]]), array([[-104.02162409],<br> [-101.3265982 ],<br> [ 762.41231116]])]<br>newcameramtx外参 [[578.70690918 0. 286.56697375]<br> [ 0. 768.62420654 341.06051709]<br> [ 0. 0. 1. ]]</p></div></div>
<h1 id="参数解释"><a href="#参数解释" class="headerlink" title="参数解释"></a>参数解释</h1><ul>
<li>cameramtx:相机内参矩阵</li>
<li>dist:相机畸变参数</li>
<li>rvec:输出的旋转向量</li>
<li>tvec:输出的平移矩阵</li>
</ul>
]]></content>
<tags>
<tag>opencv</tag>
</tags>
</entry>
<entry>
<title>python下使用aruco标记进行检测</title>
<url>/2020/07/15/python-aruco/</url>
<content><![CDATA[<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200715114612.png" alt=""></p>
<a id="more"></a>
<h1 id="ArUco标记"><a href="#ArUco标记" class="headerlink" title="ArUco标记"></a>ArUco标记</h1><p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200715093858.png" alt=""></p>
<p>首先什么是aruco标记呢?</p>
<p>aruco标记是可用于摄像机姿态估计的二进制方形基准标记。它的主要优点是检测简单、快速,并且具有很强的鲁棒性。ArUco 标记是由宽黑色边框和确定其标识符(id)的内部二进制矩阵组成的正方形标记。aruco标记的黑色边框有助于其在图像中的快速检测,内部二进制编码用于识别标记和提供错误检测和纠正。aruco标记尺寸的大小决定内部矩阵的大小,例如尺寸为 4x4 的标记由 16 位二进制数组成。</p>
<p>通俗地说,aruco标记其实就是一种编码,就和我们日常生活中的二维码是相似的,只不过由于编码方式的不同,导致它们存储信息的方式、容量等等有所差异,所以在应用层次上也会有所不同。由于单个aruco标记就可以提供足够的对应关系,例如有四个明显的角点及内部的二进制编码,所以aruco标记被广泛用来增加从二维世界映射到三维世界时的信息量,便于发现二维世界与三维世界之间的投影关系,从而实现姿态估计、相机矫正等等应用。</p>
<p>OpenCV中的ArUco模块包括了对aruco标记的创建和检测,以及将aruco标记用于姿势估计和相机矫正等应用的相关API,同时还提供了标记板等等。本次笔记中主要先整理aruco标记的创建与检测。</p>
<p>首先我们创建aruco标记时,需要先指定一个字典,这个字典表示的是创建出来的aruco标记具有怎样的尺寸、怎样的编码等等内容,我们使用API getPredefinedDictionary() 来声明我们使用的字典。在OpenCV中提供了多种预定义字典,我们可以通过PREDEFINED_DICTIONARY_NAME来查看有哪些预定义字典。字典名称表示了该字典的aruco标记数量和尺寸,例如DICT_7X7_50表示一个包含了50种7x7位标记的字典。</p>
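<p>顺带一提,如果想确认自己环境里都有哪些预定义字典(aruco模块需要安装opencv-contrib-python),可以用下面这个小技巧把cv2.aruco中以DICT_开头的名称打印出来,仅作查询示意:</p>
<figure class="highlight python"><table><tr><td class="code"><pre><span class="line">import cv2</span><br><span class="line"></span><br><span class="line"># 列出当前环境中所有预定义的 aruco 字典名称</span><br><span class="line">dict_names = [name for name in dir(cv2.aruco) if name.startswith('DICT_')]</span><br><span class="line">for name in dict_names:</span><br><span class="line">    print(name)</span><br></pre></td></tr></table></figure>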
<hr>
<h1 id="ArUco标记生成器"><a href="#ArUco标记生成器" class="headerlink" title="ArUco标记生成器"></a>ArUco标记生成器</h1><p>在线aruco标记生成器:<a href="http://aruco.dgut.top/" target="_blank" rel="noopener">http://aruco.dgut.top/</a></p>
<p>(备用):<a href="https://chev.me/arucogen/" target="_blank" rel="noopener">https://chev.me/arucogen/</a></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200715093013.png" alt=""></p>
<h1 id="在OpenCV中生成ArUco标记"><a href="#在OpenCV中生成ArUco标记" class="headerlink" title="在OpenCV中生成ArUco标记"></a>在OpenCV中生成ArUco标记</h1><h2 id="opencv-python生成aruco标记"><a href="#opencv-python生成aruco标记" class="headerlink" title="opencv-python生成aruco标记"></a>opencv-python生成aruco标记</h2><p>确定好我们需要的字典后,就可以通过API<code>drawMarker()</code>来绘制出aruco标记,其参数含义如下:</p>
<figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="comment"># 生成aruco标记</span></span><br><span class="line"><span class="comment"># 加载预定义的字典</span></span><br><span class="line">dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)</span><br><span class="line"></span><br><span class="line"><span class="comment"># 生成标记</span></span><br><span class="line">markerImage = np.zeros((<span class="number">200</span>, <span class="number">200</span>), dtype=np.uint8)</span><br><span class="line">markerImage = cv2.aruco.drawMarker(dictionary, <span class="number">22</span>, <span class="number">200</span>, markerImage, <span class="number">1</span>)</span><br><span class="line">cv2.imwrite(<span class="string">"marker22.png"</span>, markerImage)</span><br></pre></td></tr></table></figure>
<blockquote>
<p>opencv的aruco模块共有25个预定义的标记词典。每个词典中所有的Aruco标记均包含相同数量的块或位(例如4×4、5×5、6×6或7×7),且每个词典中Aruco标记的数量固定(例如50、100、250或1000)。</p>
</blockquote>
<p><code>cv2.aruco.Dictionary_get()</code>函数会加载<code>cv2.aruco.DICT_6X6_250</code>,这是一个包含250个标记的字典,其中每个标记都是6×6位二进制模式</p>
<p><code>cv2.aruco.drawMarker(dictionary, 22, 200, markerImage, 1)</code>中的第二个参数<code>22</code>是aruco的标记id(0~249),第三个参数决定生成的标记的大小,在上面的示例中,它将生成<code>200×200</code>像素的图像,第四个参数表示将要存储aruco标记的对象(上面的<code>markerImage</code>),最后,第五个参数是边界宽度参数,它决定应将多少位(块)作为边界添加到生成的二进制图案中。</p>
<p>执行后将会生成这样的标记,标记id是<code>22</code></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200715091518.png" alt=""></p>
<details>
<summary>展开所支持的标记字典</summary>
<pre><code>
DICT_4X4_50
Python: cv.aruco.DICT_4X4_50
DICT_4X4_100
Python: cv.aruco.DICT_4X4_100
DICT_4X4_250
Python: cv.aruco.DICT_4X4_250
DICT_4X4_1000
Python: cv.aruco.DICT_4X4_1000
DICT_5X5_50
Python: cv.aruco.DICT_5X5_50
DICT_5X5_100
Python: cv.aruco.DICT_5X5_100
DICT_5X5_250
Python: cv.aruco.DICT_5X5_250
DICT_5X5_1000
Python: cv.aruco.DICT_5X5_1000
DICT_6X6_50
Python: cv.aruco.DICT_6X6_50
DICT_6X6_100
Python: cv.aruco.DICT_6X6_100
DICT_6X6_250
Python: cv.aruco.DICT_6X6_250
DICT_6X6_1000
Python: cv.aruco.DICT_6X6_1000
DICT_7X7_50
Python: cv.aruco.DICT_7X7_50
DICT_7X7_100
Python: cv.aruco.DICT_7X7_100
DICT_7X7_250
Python: cv.aruco.DICT_7X7_250
DICT_7X7_1000
Python: cv.aruco.DICT_7X7_1000
DICT_ARUCO_ORIGINAL
Python: cv.aruco.DICT_ARUCO_ORIGINAL
DICT_APRILTAG_16h5
Python: cv.aruco.DICT_APRILTAG_16h5
4x4 bits, minimum hamming distance between any two codes = 5, 30 codes
</code></pre>
</details>
<hr>
<h2 id="批量生成aruco标记"><a href="#批量生成aruco标记" class="headerlink" title="批量生成aruco标记"></a>批量生成aruco标记</h2><figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="comment"># 生成aruco标记</span></span><br><span class="line"><span class="comment"># 加载预定义的字典</span></span><br><span class="line">dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)</span><br><span class="line"></span><br><span class="line"><span class="comment"># 生成标记</span></span><br><span class="line">markerImage = np.zeros((<span class="number">200</span>, <span class="number">200</span>), dtype=np.uint8)</span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(<span class="number">30</span>):</span><br><span class="line"> markerImage = cv2.aruco.drawMarker(dictionary, i, <span class="number">200</span>, markerImage, <span class="number">1</span>);</span><br><span class="line"></span><br><span class="line"> firename=<span class="string">'armark/'</span>+str(i)+<span class="string">'.png'</span></span><br><span class="line"> cv2.imwrite(firename, markerImage);</span><br></pre></td></tr></table></figure>
<p>在armark文件夹下会生成一系列的6*6 <code>aruco标记</code></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200716102446.png" alt=""></p>
<hr>
<h1 id="Aruco标记的检测和定位"><a href="#Aruco标记的检测和定位" class="headerlink" title="Aruco标记的检测和定位"></a>Aruco标记的检测和定位</h1><h2 id="静态检测"><a href="#静态检测" class="headerlink" title="静态检测"></a>静态检测</h2><p>在环境中图像检测Aruco标记,环境中有7个标记</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200716104948.png" alt=""></p>
<figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> time</span><br><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line"><span class="keyword">import</span> cv2.aruco <span class="keyword">as</span> aruco</span><br><span class="line"><span class="comment">#读取图片</span></span><br><span class="line">frame=cv2.imread(<span class="string">'IMG_3739.jpg'</span>)</span><br><span class="line"><span class="comment">#调整图片大小</span></span><br><span class="line">frame=cv2.resize(frame,<span class="literal">None</span>,fx=<span class="number">0.2</span>,fy=<span class="number">0.2</span>,interpolation=cv2.INTER_CUBIC)</span><br><span class="line"><span class="comment">#灰度话</span></span><br><span class="line">gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)</span><br><span class="line"><span class="comment">#设置预定义的字典</span></span><br><span class="line">aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_250)</span><br><span class="line"><span class="comment">#使用默认值初始化检测器参数</span></span><br><span class="line">parameters = aruco.DetectorParameters_create()</span><br><span class="line"><span class="comment">#使用aruco.detectMarkers()函数可以检测到marker,返回ID和标志板的4个角点坐标</span></span><br><span class="line">corners, ids, rejectedImgPoints = aruco.detectMarkers(gray,aruco_dict,parameters=parameters)</span><br><span class="line"><span class="comment">#画出标志位置</span></span><br><span class="line">aruco.drawDetectedMarkers(frame, corners,ids)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">cv2.imshow(<span class="string">"frame"</span>,frame)</span><br><span class="line">cv2.waitKey(<span class="number">0</span>)</span><br><span class="line">cv2.destroyAllWindows()</span><br></pre></td></tr></table></figure>
<blockquote>
<p>对于每次成功检测到标记,将按从左上,右上,右下和左下的顺序检测标记的四个角点。在C ++中,将这4个检测到的角点存储为点矢量,并将图像中的多个标记一起存储在点矢量容器中。在Python中,它们存储为Numpy 数组。</p>
<p><code>detectMarkers()</code>函数用于检测和确定标记角点的位置。</p>
<ul>
<li>第一个参数<code>image</code>是带有标记的场景图像。</li>
<li>第二个参数<code>dictionary</code>是用于生成标记的字典。成功检测到的标记角点将存储在markerCorners中,其ID存储在markerIds中;先前初始化的DetectorParameters对象则作为parameters参数传入。</li>
<li>第三个参数<code>parameters</code>: <code>DetectionParameters</code> 类的对象,该对象包括在检测过程中可以自定义的所有参数;</li>
<li>返回参数<code>corners</code>:检测到的aruco标记的角点列表,对于每个标记,其四个角点均按原始顺序返回(从左上角开始顺时针),即左上、右上、右下、左下。</li>
<li>返回<code>ids</code>:检测到的每个标记的 id,需要注意的是第三个参数和第四个参数具有相同的大小;</li>
<li>返回参数<code>rejectedImgPoints</code>:抛弃的候选标记列表,即检测到的、但未提供有效编码的正方形。每个候选标记也由其四个角定义,其格式与第三个参数相同,该参数若无特殊要求可以省略。</li>
</ul>
</blockquote>
<figure class="highlight python"><table><tr><td class="code"><pre><span class="line">corners, ids, rejectedImgPoints = aruco.detectMarkers(gray,aruco_dict,parameters=parameters)</span><br></pre></td></tr></table></figure>
<p>当我们检测到aruco标签之后,为了方便观察,我们需要进行可视化操作,把标签标记出来:使用<code>drawDetectedMarkers()</code>这个API来绘制检测到的aruco标记,其参数含义如下:</p>
<blockquote>
<ul>
<li>参数image: 是将绘制标记的输入 / 输出图像(通常就是检测到标记的图像)</li>
<li>参数corners:检测到的aruco标记的角点列表</li>
<li>参数ids:检测到的每个标记对应到其所属字典中的id,可选(如果未提供)不会绘制ID。</li>
<li>参数borderColor:绘制标记外框的颜色,其余颜色(文本颜色和第一个角颜色)将基于该颜色进行计算,以提高可视化效果。</li>
<li>无返回值</li>
</ul>
</blockquote>
<figure class="highlight reasonml"><table><tr><td class="code"><pre><span class="line">aruco.draw<span class="constructor">DetectedMarkers(<span class="params">image</span>, <span class="params">corners</span>,<span class="params">ids</span>,<span class="params">borderColor</span>)</span></span><br></pre></td></tr></table></figure>
<p>效果演示:</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200716105125.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200716111938.png" alt=""><br><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200716111939.png" alt=""><br><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200716111940.png" alt=""></p>
<h2 id="动态检测"><a href="#动态检测" class="headerlink" title="动态检测"></a>动态检测</h2><p>利用摄像头进行一个实时动态监测aruco标记并且估计姿势,摄像头的内参需要提前标定,如何标定请看我<a href="https://blog.dgut.top/2020/07/20/opencv-biaoding/">另一篇文章</a></p>
<figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> time</span><br><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line"><span class="keyword">import</span> cv2.aruco <span class="keyword">as</span> aruco</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># mtx = np.array([</span></span><br><span class="line"><span class="comment"># [2946.48, 0, 1980.53],</span></span><br><span class="line"><span class="comment"># [ 0, 2945.41, 1129.25],</span></span><br><span class="line"><span class="comment"># [ 0, 0, 1],</span></span><br><span class="line"><span class="comment"># ])</span></span><br><span class="line"><span class="comment"># #我的手机拍棋盘的时候图片大小是 4000 x 2250</span></span><br><span class="line"><span class="comment"># #ip摄像头拍视频的时候设置的是 1920 x 1080,长宽比是一样的,</span></span><br><span class="line"><span class="comment"># #ip摄像头设置分辨率的时候注意一下</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># dist = np.array( [0.226317, -1.21478, 0.00170689, -0.000334551, 1.9892] )</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment">#相机纠正参数</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># dist=np.array(([[-0.51328742, 0.33232725 , 0.01683581 ,-0.00078608, -0.1159959]]))</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># mtx=np.array([[464.73554153, 0.00000000e+00 ,323.989155],</span></span><br><span class="line"><span class="comment"># [ 0., 476.72971528 ,210.92028],</span></span><br><span class="line"><span class="comment"># [ 0., 0., 1. 
]])</span></span><br><span class="line">dist=np.array(([[<span class="number">-0.58650416</span> , <span class="number">0.59103816</span>, <span class="number">-0.00443272</span> , <span class="number">0.00357844</span> ,<span class="number">-0.27203275</span>]]))</span><br><span class="line">newcameramtx=np.array([[<span class="number">189.076828</span> , <span class="number">0.</span> , <span class="number">361.20126638</span>]</span><br><span class="line"> ,[ <span class="number">0</span> ,<span class="number">2.01627296e+04</span> ,<span class="number">4.52759577e+02</span>]</span><br><span class="line"> ,[<span class="number">0</span>, <span class="number">0</span>, <span class="number">1</span>]])</span><br><span class="line">mtx=np.array([[<span class="number">398.12724231</span> , <span class="number">0.</span> , <span class="number">304.35638757</span>],</span><br><span class="line"> [ <span class="number">0.</span> , <span class="number">345.38259888</span>, <span class="number">282.49861858</span>],</span><br><span class="line"> [ <span class="number">0.</span>, <span class="number">0.</span>, <span class="number">1.</span> ]])</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br><span class="line">cap = cv2.VideoCapture(<span class="number">0</span>)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">font = cv2.FONT_HERSHEY_SIMPLEX <span class="comment">#font for displaying text (below)</span></span><br><span class="line"></span><br><span class="line"><span class="comment">#num = 0</span></span><br><span class="line"><span class="keyword">while</span> <span class="literal">True</span>:</span><br><span class="line"> ret, frame = cap.read()</span><br><span class="line"> h1, w1 = frame.shape[:<span class="number">2</span>]</span><br><span class="line"> <span class="comment"># 读取摄像头画面</span></span><br><span class="line"> <span class="comment"># 纠正畸变</span></span><br><span class="line"> newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (h1, w1), <span class="number">0</span>, (h1, w1))</span><br><span class="line"> dst1 = cv2.undistort(frame, mtx, dist, <span class="literal">None</span>, newcameramtx)</span><br><span class="line"> x, y, w1, h1 = roi</span><br><span class="line"> dst1 = dst1[y:y + h1, x:x + w1]</span><br><span class="line"> frame=dst1</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)</span><br><span class="line"> aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_250)</span><br><span class="line"> parameters = aruco.DetectorParameters_create()</span><br><span class="line"> dst1 = cv2.undistort(frame, mtx, dist, <span class="literal">None</span>, newcameramtx)</span><br><span class="line"> <span class="string">'''</span></span><br><span class="line"><span class="string"> detectMarkers(...)</span></span><br><span class="line"><span class="string"> detectMarkers(image, dictionary[, corners[, ids[, parameters[, rejectedI</span></span><br><span class="line"><span class="string"> mgPoints]]]]) -> corners, ids, rejectedImgPoints</span></span><br><span class="line"><span class="string"> '''</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">#使用aruco.detectMarkers()函数可以检测到marker,返回ID和标志板的4个角点坐标</span></span><br><span class="line"> corners, ids, rejectedImgPoints = aruco.detectMarkers(gray,aruco_dict,parameters=parameters)</span><br><span 
class="line"></span><br><span class="line"><span class="comment"># 如果找不打id</span></span><br><span class="line"> <span class="keyword">if</span> ids <span class="keyword">is</span> <span class="keyword">not</span> <span class="literal">None</span>:</span><br><span class="line"></span><br><span class="line"> rvec, tvec, _ = aruco.estimatePoseSingleMarkers(corners, <span class="number">0.05</span>, mtx, dist)</span><br><span class="line"> <span class="comment"># 估计每个标记的姿态并返回值rvet和tvec ---不同</span></span><br><span class="line"> <span class="comment"># from camera coeficcients</span></span><br><span class="line"> (rvec-tvec).any() <span class="comment"># get rid of that nasty numpy value array error</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># aruco.drawAxis(frame, mtx, dist, rvec, tvec, 0.1) #绘制轴</span></span><br><span class="line"><span class="comment"># aruco.drawDetectedMarkers(frame, corners) #在标记周围画一个正方形</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(rvec.shape[<span class="number">0</span>]):</span><br><span class="line"> aruco.drawAxis(frame, mtx, dist, rvec[i, :, :], tvec[i, :, :], <span class="number">0.03</span>)</span><br><span class="line"> aruco.drawDetectedMarkers(frame, corners)</span><br><span class="line"> <span class="comment">###### DRAW ID #####</span></span><br><span class="line"> cv2.putText(frame, <span class="string">"Id: "</span> + str(ids), (<span class="number">0</span>,<span class="number">64</span>), font, <span class="number">1</span>, (<span class="number">0</span>,<span class="number">255</span>,<span class="number">0</span>),<span class="number">2</span>,cv2.LINE_AA)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="comment">##### DRAW "NO IDS" #####</span></span><br><span class="line"> cv2.putText(frame, <span class="string">"No Ids"</span>, (<span class="number">0</span>,<span class="number">64</span>), font, <span class="number">1</span>, (<span class="number">0</span>,<span class="number">255</span>,<span class="number">0</span>),<span class="number">2</span>,cv2.LINE_AA)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="comment"># 显示结果框架</span></span><br><span class="line"> cv2.imshow(<span class="string">"frame"</span>,frame)</span><br><span class="line"></span><br><span class="line"> key = cv2.waitKey(<span class="number">1</span>)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> key == <span class="number">27</span>: <span class="comment"># 按esc键退出</span></span><br><span class="line"> print(<span class="string">'esc break...'</span>)</span><br><span class="line"> cap.release()</span><br><span class="line"> cv2.destroyAllWindows()</span><br><span class="line"> <span class="keyword">break</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> key == ord(<span class="string">' '</span>): <span class="comment"># 按空格键保存</span></span><br><span class="line"><span class="comment"># num = num + 1</span></span><br><span class="line"><span class="comment"># filename = "frames_%s.jpg" % num # 保存一张图像</span></span><br><span class="line"> filename = str(time.time())[:<span class="number">10</span>] + <span class="string">".jpg"</span></span><br><span class="line"> 
cv2.imwrite(filename, frame)</span><br></pre></td></tr></table></figure>
<h3 id="效果"><a href="#效果" class="headerlink" title="效果"></a>效果</h3><p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200715113014.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200715113031.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200715113042.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200716120123.png" alt=""></p>
<h1 id="附件"><a href="#附件" class="headerlink" title="附件"></a>附件</h1><h2 id="相机标定,并且写入文件保存标定文件"><a href="#相机标定,并且写入文件保存标定文件" class="headerlink" title="相机标定,并且写入文件保存标定文件"></a>相机标定,并且写入文件保存标定文件</h2><figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> glob</span><br><span class="line"><span class="keyword">import</span> matplotlib.pyplot <span class="keyword">as</span> plt</span><br><span class="line"><span class="keyword">import</span> matplotlib.patches <span class="keyword">as</span> patches</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># 找棋盘格角点标定并且写入文件</span></span><br><span class="line"></span><br><span class="line">criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, <span class="number">30</span>, <span class="number">0.001</span>) <span class="comment"># 阈值</span></span><br><span class="line"><span class="comment">#棋盘格模板规格</span></span><br><span class="line">w = <span class="number">9</span> <span class="comment"># 10 - 1</span></span><br><span class="line">h = <span class="number">6</span> <span class="comment"># 7 - 1</span></span><br><span class="line"><span class="comment"># 世界坐标系中的棋盘格点,例如(0,0,0), (1,0,0), (2,0,0) ....,(8,5,0),去掉Z坐标,记为二维矩阵</span></span><br><span class="line">objp = np.zeros((w*h,<span class="number">3</span>), np.float32)</span><br><span class="line">objp[:,:<span class="number">2</span>] = np.mgrid[<span class="number">0</span>:w,<span class="number">0</span>:h].T.reshape(<span class="number">-1</span>,<span class="number">2</span>)</span><br><span class="line">objp = objp*<span class="number">18.1</span> <span class="comment"># 18.1 mm</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 储存棋盘格角点的世界坐标和图像坐标对</span></span><br><span class="line">objpoints = [] <span class="comment"># 在世界坐标系中的三维点</span></span><br><span class="line">imgpoints = [] <span class="comment"># 在图像平面的二维点</span></span><br><span class="line"></span><br><span class="line">images = glob.glob(<span class="string">'./pic/*.jpg'</span>) <span class="comment"># 拍摄的十几张棋盘图片所在目录</span></span><br><span class="line"></span><br><span class="line">i = <span class="number">1</span></span><br><span class="line"><span class="keyword">for</span> fname <span class="keyword">in</span> images:</span><br><span class="line"></span><br><span class="line"> img = cv2.imread(fname)</span><br><span class="line"> <span class="comment"># 获取画面中心点</span></span><br><span class="line"></span><br><span class="line"> h1, w1 = img.shape[<span class="number">0</span>], img.shape[<span class="number">1</span>]</span><br><span class="line"> gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)</span><br><span class="line"> u, v = img.shape[:<span class="number">2</span>]</span><br><span class="line"> <span class="comment"># 找到棋盘格角点</span></span><br><span class="line"> ret, corners = cv2.findChessboardCorners(gray, (w,h),<span class="literal">None</span>)</span><br><span class="line"> <span class="comment"># 如果找到足够点对,将其存储起来</span></span><br><span class="line"> <span class="keyword">if</span> ret == <span class="literal">True</span>:</span><br><span class="line"> print(<span class="string">"i:"</span>, i)</span><br><span class="line"> i = i+<span class="number">1</span></span><br><span 
class="line"></span><br><span class="line"> cv2.cornerSubPix(gray,corners,(<span class="number">11</span>,<span class="number">11</span>),(<span class="number">-1</span>,<span class="number">-1</span>),criteria)</span><br><span class="line"> objpoints.append(objp)</span><br><span class="line"> imgpoints.append(corners)</span><br><span class="line"> <span class="comment"># 将角点在图像上显示</span></span><br><span class="line"> cv2.drawChessboardCorners(img, (w,h), corners, ret)</span><br><span class="line"> cv2.namedWindow(<span class="string">'findCorners'</span>, cv2.WINDOW_NORMAL)</span><br><span class="line"> cv2.resizeWindow(<span class="string">'findCorners'</span>, <span class="number">640</span>, <span class="number">480</span>)</span><br><span class="line"> cv2.imshow(<span class="string">'findCorners'</span>,img)</span><br><span class="line"> cv2.waitKey(<span class="number">200</span>)</span><br><span class="line">cv2.destroyAllWindows()</span><br><span class="line"><span class="comment">#%% 标定</span></span><br><span class="line">print(<span class="string">'正在计算'</span>)</span><br><span class="line">ret, mtx, dist, rvecs, tvecs = \</span><br><span class="line"> cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::<span class="number">-1</span>], <span class="literal">None</span>, <span class="literal">None</span>)</span><br><span class="line">cv_file=cv2.FileStorage(<span class="string">"camera.yaml"</span>,cv2.FILE_STORAGE_WRITE)</span><br><span class="line">cv_file.write(<span class="string">"camera_matrix"</span>,mtx)</span><br><span class="line">cv_file.write(<span class="string">"dist_coeff"</span>,dist)</span><br><span class="line"><span class="comment"># 请注意,*释放*不会关闭()FileStorage对象</span></span><br><span class="line"></span><br><span class="line">cv_file.release()</span><br><span class="line"></span><br><span class="line">print(<span class="string">"ret:"</span>,ret )</span><br><span class="line">print(<span class="string">"mtx:\n"</span>,mtx) <span class="comment"># 内参数矩阵</span></span><br><span class="line">print(<span class="string">"dist畸变值:\n"</span>,dist ) <span class="comment"># 畸变系数 distortion cofficients = (k_1,k_2,p_1,p_2,k_3)</span></span><br><span class="line">print(<span class="string">"rvecs旋转(向量)外参:\n"</span>,rvecs) <span class="comment"># 旋转向量 # 外参数</span></span><br><span class="line">print(<span class="string">"tvecs平移(向量)外参:\n"</span>,tvecs ) <span class="comment"># 平移向量 # 外参数</span></span><br><span class="line">newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (u, v), <span class="number">0</span>, (u, v))</span><br><span class="line">print(<span class="string">'newcameramtx外参'</span>,newcameramtx)</span><br><span class="line">camera=cv2.VideoCapture(<span class="number">0</span>)</span><br><span class="line"></span><br><span class="line"><span class="comment"># dist=np.array(([[-0.3918239532375715, 0.1553689004591761, 0.001069066277469635, 2.175204930902934e-06, -0.02850420360197434]]))</span></span><br><span class="line"><span class="comment"># # newcameramtx=np.array([[1.85389837e+04 ,0.00000000e+00, 5.48743017e+02]</span></span><br><span class="line"><span class="comment"># # ,[ 0 ,2.01627296e+04 ,4.52759577e+02]</span></span><br><span class="line"><span class="comment"># # ,[0, 0, 1]])</span></span><br><span class="line"><span class="comment"># mtx=np.array([[379.1368428730273, 0, 312.1210537268028],</span></span><br><span class="line"><span class="comment"># [ 0, 381.6396537294123, 242.492484246843],</span></span><br><span class="line"><span 
class="comment"># [ 0., 0., 1. ]])</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="keyword">while</span> <span class="literal">True</span>:</span><br><span class="line"> (grabbed,frame)=camera.read()</span><br><span class="line"> h1, w1 = frame.shape[:<span class="number">2</span>]</span><br><span class="line"> <span class="comment">#打开标定文件</span></span><br><span class="line"> cv_file = cv2.FileStorage(<span class="string">"camera.yaml"</span>, cv2.FILE_STORAGE_READ)</span><br><span class="line"> camera_matrix = cv_file.getNode(<span class="string">"camera_matrix"</span>).mat()</span><br><span class="line"> dist_matrix = cv_file.getNode(<span class="string">"dist_coeff"</span>).mat()</span><br><span class="line"> cv_file.release()</span><br><span class="line"></span><br><span class="line"> newcameramtx, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_matrix, (u, v), <span class="number">0</span>, (u, v))</span><br><span class="line"> <span class="comment"># 纠正畸变</span></span><br><span class="line"> dst1 = cv2.undistort(frame, camera_matrix, dist_matrix, <span class="literal">None</span>, newcameramtx)</span><br><span class="line"> <span class="comment">#dst2 = cv2.undistort(frame, mtx, dist, None, newcameramtx)</span></span><br><span class="line"> mapx,mapy=cv2.initUndistortRectifyMap(camera_matrix,dist_matrix,<span class="literal">None</span>,newcameramtx,(w1,h1),<span class="number">5</span>)</span><br><span class="line"> dst2=cv2.remap(frame,mapx,mapy,cv2.INTER_LINEAR)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="comment"># 裁剪图像,输出纠正畸变以后的图片</span></span><br><span class="line"> x, y, w1, h1 = roi</span><br><span class="line"> dst1 = dst1[y:y + h1, x:x + w1]</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> cv2.imshow(<span class="string">'dst1'</span>,dst1)</span><br><span class="line"> <span class="comment">#cv2.imshow('dst2', dst2)</span></span><br><span class="line"> <span class="keyword">if</span> cv2.waitKey(<span class="number">1</span>) & <span class="number">0xFF</span> == ord(<span class="string">'q'</span>): <span class="comment"># 按q保存一张图片</span></span><br><span class="line"> cv2.imwrite(<span class="string">"../u4/frame.jpg"</span>, dst1)</span><br><span class="line"> <span class="keyword">break</span></span><br><span class="line"></span><br><span class="line">camera.release()</span><br><span class="line">cv2.destroyAllWindows()</span><br></pre></td></tr></table></figure>
<h2 id="利用标定文件检测aruco标签"><a href="#利用标定文件检测aruco标签" class="headerlink" title="利用标定文件检测aruco标签"></a>利用标定文件检测aruco标签</h2><figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> time</span><br><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line"><span class="keyword">import</span> cv2.aruco <span class="keyword">as</span> aruco</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment">#加载相机纠正参数</span></span><br><span class="line">cv_file = cv2.FileStorage(<span class="string">"yuyan.yaml"</span>, cv2.FILE_STORAGE_READ)</span><br><span class="line">camera_matrix = cv_file.getNode(<span class="string">"camera_matrix"</span>).mat()</span><br><span class="line">dist_matrix = cv_file.getNode(<span class="string">"dist_coeff"</span>).mat()</span><br><span class="line">cv_file.release()</span><br><span class="line"></span><br><span class="line"><span class="comment"># dist=np.array(([[-0.51328742, 0.33232725 , 0.01683581 ,-0.00078608, -0.1159959]]))</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># mtx=np.array([[464.73554153, 0.00000000e+00 ,323.989155],</span></span><br><span class="line"><span class="comment"># [ 0., 476.72971528 ,210.92028],</span></span><br><span class="line"><span class="comment"># [ 0., 0., 1. ]])</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># dist=np.array(([[-0.58650416 , 0.59103816, -0.00443272 , 0.00357844 ,-0.27203275]]))</span></span><br><span class="line"><span class="comment"># newcameramtx=np.array([[189.076828 , 0. , 361.20126638]</span></span><br><span class="line"><span class="comment"># ,[ 0 ,2.01627296e+04 ,4.52759577e+02]</span></span><br><span class="line"><span class="comment"># ,[0, 0, 1]])</span></span><br><span class="line"><span class="comment"># mtx=np.array([[398.12724231 , 0. , 304.35638757],</span></span><br><span class="line"><span class="comment"># [ 0. , 345.38259888, 282.49861858],</span></span><br><span class="line"><span class="comment"># [ 0., 0., 1. 
]])</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br><span class="line">cap = cv2.VideoCapture(<span class="number">0</span>)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">font = cv2.FONT_HERSHEY_SIMPLEX <span class="comment">#font for displaying text (below)</span></span><br><span class="line"></span><br><span class="line"><span class="comment">#num = 0</span></span><br><span class="line"><span class="keyword">while</span> <span class="literal">True</span>:</span><br><span class="line"> ret, frame = cap.read()</span><br><span class="line"> h1, w1 = frame.shape[:<span class="number">2</span>]</span><br><span class="line"> <span class="comment"># 读取摄像头画面</span></span><br><span class="line"> <span class="comment"># 纠正畸变</span></span><br><span class="line"> newcameramtx, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_matrix, (h1, w1), <span class="number">0</span>, (h1, w1))</span><br><span class="line"> dst1 = cv2.undistort(frame, camera_matrix, dist_matrix, <span class="literal">None</span>, newcameramtx)</span><br><span class="line"> x, y, w1, h1 = roi</span><br><span class="line"> dst1 = dst1[y:y + h1, x:x + w1]</span><br><span class="line"> frame=dst1</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)</span><br><span class="line"> aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_250)</span><br><span class="line"> parameters = aruco.DetectorParameters_create()</span><br><span class="line"> <span class="comment">#dst1 = cv2.undistort(frame, mtx, dist, None, newcameramtx)</span></span><br><span class="line"> <span class="string">'''</span></span><br><span class="line"><span class="string"> detectMarkers(...)</span></span><br><span class="line"><span class="string"> detectMarkers(image, dictionary[, corners[, ids[, parameters[, rejectedI</span></span><br><span class="line"><span class="string"> mgPoints]]]]) -> corners, ids, rejectedImgPoints</span></span><br><span class="line"><span class="string"> '''</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">#使用aruco.detectMarkers()函数可以检测到marker,返回ID和标志板的4个角点坐标</span></span><br><span class="line"> corners, ids, rejectedImgPoints = aruco.detectMarkers(gray,aruco_dict,parameters=parameters)</span><br><span class="line"></span><br><span class="line"><span class="comment"># 如果找不打id</span></span><br><span class="line"> <span class="keyword">if</span> ids <span class="keyword">is</span> <span class="keyword">not</span> <span class="literal">None</span>:</span><br><span class="line"></span><br><span class="line"> rvec, tvec, _ = aruco.estimatePoseSingleMarkers(corners, <span class="number">0.05</span>, camera_matrix, dist_matrix)</span><br><span class="line"> <span class="comment"># 估计每个标记的姿态并返回值rvet和tvec ---不同</span></span><br><span class="line"> <span class="comment"># from camera coeficcients</span></span><br><span class="line"> (rvec-tvec).any() <span class="comment"># get rid of that nasty numpy value array error</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># aruco.drawAxis(frame, mtx, dist, rvec, tvec, 0.1) #绘制轴</span></span><br><span class="line"><span class="comment"># aruco.drawDetectedMarkers(frame, corners) #在标记周围画一个正方形</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> 
range(rvec.shape[<span class="number">0</span>]):</span><br><span class="line"> aruco.drawAxis(frame, camera_matrix, dist_matrix, rvec[i, :, :], tvec[i, :, :], <span class="number">0.03</span>)</span><br><span class="line"> aruco.drawDetectedMarkers(frame, corners,ids)</span><br><span class="line"> <span class="comment">###### DRAW ID #####</span></span><br><span class="line"> cv2.putText(frame, <span class="string">"Id: "</span> + str(ids), (<span class="number">0</span>,<span class="number">64</span>), font, <span class="number">1</span>, (<span class="number">0</span>,<span class="number">255</span>,<span class="number">0</span>),<span class="number">2</span>,cv2.LINE_AA)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="comment">##### DRAW "NO IDS" #####</span></span><br><span class="line"> cv2.putText(frame, <span class="string">"No Ids"</span>, (<span class="number">0</span>,<span class="number">64</span>), font, <span class="number">1</span>, (<span class="number">0</span>,<span class="number">255</span>,<span class="number">0</span>),<span class="number">2</span>,cv2.LINE_AA)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="comment"># 显示结果框架</span></span><br><span class="line"> cv2.imshow(<span class="string">"frame"</span>,frame)</span><br><span class="line"></span><br><span class="line"> key = cv2.waitKey(<span class="number">1</span>)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> key == <span class="number">27</span>: <span class="comment"># 按esc键退出</span></span><br><span class="line"> print(<span class="string">'esc break...'</span>)</span><br><span class="line"> cap.release()</span><br><span class="line"> cv2.destroyAllWindows()</span><br><span class="line"> <span class="keyword">break</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> key == ord(<span class="string">' '</span>): <span class="comment"># 按空格键保存</span></span><br><span class="line"><span class="comment"># num = num + 1</span></span><br><span class="line"><span class="comment"># filename = "frames_%s.jpg" % num # 保存一张图像</span></span><br><span class="line"> filename = str(time.time())[:<span class="number">10</span>] + <span class="string">".jpg"</span></span><br><span class="line"> cv2.imwrite(filename, frame)</span><br></pre></td></tr></table></figure>
<p>Here <code>yuyan.yaml</code> is the saved calibration file; it is opened with <code>cv2.FileStorage("yuyan.yaml", cv2.FILE_STORAGE_READ)</code> and the matrices are read back with <code>cv_file.getNode("camera_matrix").mat()</code>.</p>
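<p>As a standalone reference, here is a minimal sketch of that FileStorage round trip. The file name <code>yuyan.yaml</code> and the node names follow the listings above; the identity matrix and zero coefficients are placeholder values, not a real calibration.</p>
<figure class="highlight python"><table><tr><td class="code"><pre>
import numpy as np
import cv2

# Placeholder calibration data -- replace with the output of cv2.calibrateCamera
mtx = np.eye(3)           # intrinsic matrix
dist = np.zeros((1, 5))   # distortion coefficients

# Write the calibration to a YAML file
fs = cv2.FileStorage("yuyan.yaml", cv2.FILE_STORAGE_WRITE)
fs.write("camera_matrix", mtx)
fs.write("dist_coeff", dist)
fs.release()

# Read it back
fs = cv2.FileStorage("yuyan.yaml", cv2.FILE_STORAGE_READ)
camera_matrix = fs.getNode("camera_matrix").mat()
dist_matrix = fs.getNode("dist_coeff").mat()
fs.release()

print(camera_matrix)
print(dist_matrix)
</pre></td></tr></table></figure>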
<p>References:</p>
<p>1.<a href="https://blog.csdn.net/sinat_17456165/article/details/105649131" target="_blank" rel="noopener">https://blog.csdn.net/sinat_17456165/article/details/105649131</a></p>
<p>2.<a href="https://www.learnopencv.com/augmented-reality-using-aruco-markers-in-opencv-c-python/" target="_blank" rel="noopener">https://www.learnopencv.com/augmented-reality-using-aruco-markers-in-opencv-c-python/</a></p>
]]></content>
<tags>
<tag>opencv</tag>
<tag>aruco</tag>
<tag>python</tag>
</tags>
</entry>
<entry>
<title>How long has it been since you looked up at the night sky</title>
<url>/2020/07/28/night-thinks/</url>
<content><![CDATA[<p>Some thoughts on night vision:</p>
<p>(The images below show the actual output of a full-color night-vision device.)</p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200728134952.jpg" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200728134919.jpg" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200728135034.jpg" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200728142354.jpg" alt=""><br><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200728142355.jpg" alt=""><br><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200728142356.jpg" alt=""><br><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200728142357.jpg" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200728135053.jpg" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200728162818.png" alt=""></p>
]]></content>
<tags>
<tag>game</tag>
</tags>
</entry>
<entry>
<title>Real-time QR code detection and localization with opencv4.0, pyzbar and python</title>
<url>/2020/07/13/qrde/</url>
<content><![CDATA[<h1 id="项目启动"><a href="#项目启动" class="headerlink" title="项目启动"></a>项目启动</h1><p>这是我入职以来第一个任务吧,要完成巡检机器人的一个视觉定位功能,目前想的是机器人通过摄像头检测到张贴在室内各个定点位置二维码,通过识别二维码内部的信息和定制二维码的大小,获取到机器人的位置。</p>
<p><img src= "/img/loading.gif" data-src="https://gitee.com/usg1024/imgshow/raw/master/img/20200713114358.png" alt=""></p>
<a id="more"></a>
<p>(The robot itself is still only the tip of the iceberg: many features have not been implemented yet and the project is at an early stage.)</p>
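<p>To make the localization idea concrete, the sketch below estimates the distance to a code of known physical size from its apparent size in the image using the pinhole camera model. All numbers (focal length, code size, pixel measurement) are made-up illustrative values, not parameters from the actual robot.</p>
<figure class="highlight python"><table><tr><td class="code"><pre>
# Rough pinhole-model distance estimate from a marker of known physical size.
# All values below are illustrative placeholders, not the robot's real parameters.

def estimate_distance(focal_length_px, real_size_m, apparent_size_px):
    # Similar triangles: real_size / distance = apparent_size / focal_length
    return focal_length_px * real_size_m / apparent_size_px

f_px = 600.0         # assumed focal length in pixels (from camera calibration)
code_size_m = 0.20   # assumed printed QR code side length: 20 cm
side_px = 85.0       # measured side length of the detected code in the image

print("approx. distance: %.2f m" % estimate_distance(f_px, code_size_m, side_px))
</pre></td></tr></table></figure>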
<h1 id="前期准备"><a href="#前期准备" class="headerlink" title="前期准备"></a>前期准备</h1><p>opencv4.0版本也是发布了,以后应该都是用opencv4.0了,现在已经内置了二维码识别模块,但是在写这段代码的时候还是用的是3.0版本,利用了pyzbar模块进行解码</p>
<p>Environment setup:</p>
<ul>
<li>opencv 4.0 (or 3.x)</li>
<li>Python 3 (any version)</li>
<li>pyzbar</li>
</ul>
<p>For more details you can check <a href="https://blog.csdn.net/dgut_guangdian/article/details/106860637" target="_blank" rel="noopener">my CSDN post</a>, where the content is much the same.</p>
<h1 id="初步效果"><a href="#初步效果" class="headerlink" title="初步效果"></a>初步效果</h1><p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/ZengWenJian123/picBed/img/20200713100428.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/PMNCHH/cdn@master/2020/07/13/ef0bd27c920cd8184b333429fa86c461.png" alt=""></p>
<p><img src= "/img/loading.gif" data-src="https://cdn.jsdelivr.net/gh/PMNCHH/cdn@master/2020/07/13/dd5735262b51b1911dc4fb016865856a.png" alt=""></p>
<p>All three QR codes in the frame are detected here. Because the QR code content is Chinese, drawing it onto the frame produces garbled text; use <code>matplotlib</code> to display the image instead. The correctly decoded text can always be read from the console output.</p>
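<p>For example, a frame can be displayed with matplotlib like this (a small sketch; the file name is just an assumed example of a frame saved by the script below, and OpenCV's BGR channel order has to be converted to RGB first):</p>
<figure class="highlight python"><table><tr><td class="code"><pre>
import cv2
from matplotlib import pyplot as plt

# Display an OpenCV frame with matplotlib; OpenCV stores images as BGR,
# while matplotlib expects RGB, hence the conversion.
frame = cv2.imread("frame.jpg")  # assumed example: a frame saved by the script below
plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
plt.axis("off")
plt.show()
</pre></td></tr></table></figure>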
<hr>
<h1 id="code"><a href="#code" class="headerlink" title="code"></a>code</h1><figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line"><span class="keyword">from</span> pyzbar <span class="keyword">import</span> pyzbar</span><br><span class="line"><span class="comment">#二维码动态识别</span></span><br><span class="line">camera=cv2.VideoCapture(<span class="number">0</span>)</span><br><span class="line">camera.set(<span class="number">3</span>,<span class="number">1280</span>) <span class="comment">#设置分辨率</span></span><br><span class="line">camera.set(<span class="number">4</span>,<span class="number">768</span>)</span><br><span class="line"><span class="keyword">while</span> <span class="literal">True</span>:</span><br><span class="line"> (grabbed,frame)=camera.read()</span><br><span class="line"> <span class="comment">#获取画面中心点</span></span><br><span class="line"> h1,w1= frame.shape[<span class="number">0</span>],frame.shape[<span class="number">1</span>]</span><br><span class="line"> </span><br><span class="line"> <span class="comment"># 纠正畸变(这里把相机标定的代码去除了,各位自行标定吧)</span></span><br><span class="line"> dst = frame</span><br><span class="line"> </span><br><span class="line"> <span class="comment"># 扫描二维码</span></span><br><span class="line"> text = pyzbar.decode(dst)</span><br><span class="line"> <span class="keyword">for</span> texts <span class="keyword">in</span> text:</span><br><span class="line"> textdate = texts.data.decode(<span class="string">'utf-8'</span>)</span><br><span class="line"> print(textdate)</span><br><span class="line"> (x, y, w, h) = texts.rect<span class="comment">#获取二维码的外接矩形顶点坐标</span></span><br><span class="line"> print(<span class="string">'识别内容:'</span>+textdate)</span><br><span class="line"> </span><br><span class="line"> <span class="comment"># 二维码中心坐标</span></span><br><span class="line"> cx = int(x + w / <span class="number">2</span>)</span><br><span class="line"> cy = int(y + h / <span class="number">2</span>)</span><br><span class="line"> cv2.circle(dst, (cx, cy), <span class="number">2</span>, (<span class="number">0</span>, <span class="number">255</span>, <span class="number">0</span>), <span class="number">8</span>) <span class="comment"># 做出中心坐标</span></span><br><span class="line"> print(<span class="string">'中间点坐标:'</span>,cx,cy)</span><br><span class="line"> coordinate=(cx,cy)</span><br><span class="line"> <span class="comment">#在画面左上角写出二维码中心位置</span></span><br><span class="line"> cv2.putText(dst,<span class="string">'QRcode_location'</span>+str(coordinate),(<span class="number">20</span>,<span class="number">20</span>), cv2.FONT_HERSHEY_SIMPLEX, <span class="number">0.5</span>, (<span class="number">0</span>, <span class="number">255</span>, <span class="number">0</span>), <span class="number">2</span>)</span><br><span class="line"> <span class="comment">#画出画面中心与二维码中心的连接线</span></span><br><span class="line"> cv2.line(dst, (cx,cy),(int(w1/<span class="number">2</span>),int(h1/<span class="number">2</span>)), (<span class="number">255</span>, <span class="number">0</span>, <span class="number">0</span>), <span class="number">2</span>)</span><br><span class="line"> <span class="comment">#cv2.rectangle(dst, (x, y), (x + w, y + h), (0, 255, 255), 2) # 做出外接矩形</span></span><br><span class="line"> <span class="comment">#二维码最小矩形</span></span><br><span class="line"> cv2.line(dst, texts.polygon[<span class="number">0</span>], texts.polygon[<span class="number">1</span>], (<span 
class="number">255</span>, <span class="number">0</span>, <span class="number">0</span>), <span class="number">2</span>)</span><br><span class="line"> cv2.line(dst, texts.polygon[<span class="number">1</span>], texts.polygon[<span class="number">2</span>], (<span class="number">255</span>, <span class="number">0</span>, <span class="number">0</span>), <span class="number">2</span>)</span><br><span class="line"> cv2.line(dst, texts.polygon[<span class="number">2</span>], texts.polygon[<span class="number">3</span>], (<span class="number">255</span>, <span class="number">0</span>, <span class="number">0</span>), <span class="number">2</span>)</span><br><span class="line"> cv2.line(dst, texts.polygon[<span class="number">3</span>], texts.polygon[<span class="number">0</span>], (<span class="number">255</span>, <span class="number">0</span>, <span class="number">0</span>), <span class="number">2</span>)</span><br><span class="line"> <span class="comment">#写出扫描内容</span></span><br><span class="line"> txt = <span class="string">'('</span> + texts.type + <span class="string">') '</span> + textdate</span><br><span class="line"> cv2.putText(dst, txt, (x - <span class="number">10</span>, y - <span class="number">10</span>), cv2.FONT_HERSHEY_SIMPLEX, <span class="number">0.5</span>, (<span class="number">0</span>, <span class="number">50</span>, <span class="number">255</span>), <span class="number">2</span>)</span><br><span class="line"> </span><br><span class="line"> </span><br><span class="line"> cv2.imshow(<span class="string">'dst'</span>,dst)</span><br><span class="line"> <span class="keyword">if</span> cv2.waitKey(<span class="number">1</span>) & <span class="number">0xFF</span> == ord(<span class="string">'q'</span>): <span class="comment"># 按q保存一张图片</span></span><br><span class="line"> cv2.imwrite(<span class="string">"./frame.jpg"</span>, frame)</span><br><span class="line"> <span class="keyword">break</span></span><br><span class="line"> </span><br><span class="line">camera.release()</span><br><span class="line">cv2.destroyAllWindows()</span><br></pre></td></tr></table></figure>
<p>The code can be run as-is and works with both OpenCV 3.x and 4.x.</p>
]]></content>
<tags>
<tag>opencv</tag>
<tag>python</tag>