
Android: a high-fidelity imitation of the WeChat/Alipay scanner (weak-light detection and auto-zoom)

2017-11-26 18:48
Most apps ship a bare-bones scan feature: it scans, and that is it. WeChat's scanner is still the best of the bunch: it detects weak-light environments (and prompts the user to turn on the flashlight), and when the scan target is too far away it automatically zooms the camera in, several times if necessary (I verified this myself). Details decide success. Alipay has the same features, but in my view its weak-light handling is mediocre and its auto-zoom a bit half-baked, though it is still quite good. For most projects the bare-bones scanner is enough, but I was handed the requirement to reproduce the WeChat/Alipay behavior, which is what this article covers.


Here are the demo recordings (captured with GIF大师 at low quality; higher-quality files were too large to upload).

The first GIF shows weak light being detected: a "flashlight" button appears dynamically, and after tapping it to turn the torch on, the label switches to "turn off flashlight".

The second GIF shows the automatic zoom while scanning.




Requirements analysis

1. The framing rectangle in the middle needs little explanation: it is simple to draw in onDraw using plain Android canvas coordinates.

2. Weak-light detection: I spent two days on this. On iOS, reading the brightness seen by the rear camera is easy: a few lines of code reading the brightnessValue. On Android, my first version used the light sensor, but the light sensor sits next to the front camera while scanning uses the rear camera; it worked fine at night but was not accurate enough in daylight, so I dropped that approach (a minimal sketch of it follows this list). I then tried reading the live frame stream via JpegReader.metadata() and ExifInterface, and both attempts failed. Camera2 probably exposes a suitable API, but it requires Android 5.0+, so I did not dig into it. After watching how Alipay behaves, I realized it distinguishes weak light by analyzing the color of the rear camera's preview frames, and repeated testing confirmed this; WeChat most likely does the same, just with finer tuning.

3. Auto-zoom while scanning: think about it for a moment and it turns out to be simple. Camera exposes a zoom parameter; the only real question is the trigger condition. WeChat only zooms in when a QR code is actually present in the frame, and it will zoom several times if needed.
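For reference, here is a minimal sketch of the abandoned light-sensor approach from point 2. It uses the standard SensorManager / Sensor.TYPE_LIGHT API; the LightSensorMonitor class name and the 10-lux threshold are my own assumptions and would need per-device tuning. Constants.isWeakLight is the same flag used later in the article.

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Minimal sketch of the abandoned approach: listen to the ambient light sensor
// (which sits next to the front camera, hence the daylight inaccuracy).
public class LightSensorMonitor implements SensorEventListener {

    private static final float WEAK_LIGHT_LUX = 10f; // assumed threshold, needs tuning

    private final SensorManager sensorManager;

    public LightSensorMonitor(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
    }

    public void start() {
        Sensor lightSensor = sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
        if (lightSensor != null) {
            sensorManager.registerListener(this, lightSensor, SensorManager.SENSOR_DELAY_NORMAL);
        }
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values[0] is the ambient light level in lux.
        Constants.isWeakLight = event.values[0] < WEAK_LIGHT_LUX;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // No-op.
    }
}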


Code implementation

Our project uses ZXing, so naturally the source has to be modified.

The UI layer is genuinely simple: Android's coordinate system plus the Canvas drawing APIs are enough to draw the Rect region; just modify the onDraw method of the ViewfinderView class.
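As a rough illustration only (not this project's actual drawing code), a minimal viewfinder overlay might look like the sketch below. The SimpleViewfinderView class name, the framing-rect size, and the colors are assumptions; the real ViewfinderView also draws corner markers and the moving laser line.

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.util.AttributeSet;
import android.view.View;

// Minimal sketch of a viewfinder overlay: dim the area outside an assumed centered
// framing rectangle and outline the frame itself.
public class SimpleViewfinderView extends View {

    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public SimpleViewfinderView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        int width = getWidth();
        int height = getHeight();
        // Assumed framing rect: a centered square, half the view width.
        int size = width / 2;
        Rect frame = new Rect((width - size) / 2, (height - size) / 2,
                (width + size) / 2, (height + size) / 2);

        // Dim everything outside the frame.
        paint.setStyle(Paint.Style.FILL);
        paint.setColor(0x60000000);
        canvas.drawRect(0, 0, width, frame.top, paint);
        canvas.drawRect(0, frame.top, frame.left, frame.bottom, paint);
        canvas.drawRect(frame.right, frame.top, width, frame.bottom, paint);
        canvas.drawRect(0, frame.bottom, width, height, paint);

        // Outline the scan frame itself.
        paint.setColor(Color.WHITE);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(4);
        canvas.drawRect(frame, paint);
    }
}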


Weak-light detection

With the analysis above done, the plan is clear: analyze the color (RGB) values of the preview frames in real time. That means hooking into the place where ZXing decodes the live frames: ZXing uses DecodeThread and DecodeHandler, and the decode thread continuously analyzes and decodes the preview stream.


/**
 * Decode the data within the viewfinder rectangle, and time how long it took. For efficiency,
 * reuse the same reader objects from one decode to the next.
 *
 * @param data   The YUV preview frame.
 * @param width  The width of the preview frame.
 * @param height The height of the preview frame.
 */
private void decode(byte[] data, int width, int height)

The data here is in YUV format, and Android ships a related conversion helper, YuvImage.
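For completeness, a minimal sketch of that YuvImage route: compress the NV21 preview frame to JPEG, then decode it into a Bitmap. I did not end up using it here; the YuvUtils class name is hypothetical, and the frame is assumed to be NV21, the default Camera preview format.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;

import java.io.ByteArrayOutputStream;

// Sketch of converting a preview frame to a Bitmap via YuvImage (an alternative
// to the manual decodeYUV420SP() conversion used in this article).
public final class YuvUtils {

    private YuvUtils() {
    }

    public static Bitmap previewFrameToBitmap(byte[] data, int width, int height) {
        // NV21 is assumed because it is the default Camera preview format.
        YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // 50% JPEG quality is enough for brightness analysis and keeps allocations small.
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 50, out);
        byte[] jpeg = out.toByteArray();
        return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
    }
}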

Here is the method for converting YUV to RGB (copied from the internet, as these things tend to be):


private int[] decodeYUV420SP(byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    int[] rgb = new int[width * height];
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0;
            else if (r > 262143) r = 262143;
            if (g < 0) g = 0;
            else if (g > 262143) g = 262143;
            if (b < 0) b = 0;
            else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
    return rgb;
}

Convert the RGB array to a Bitmap with Bitmap.createBitmap, then compute the image's average color. Colors are 32-bit ARGB values; pure black is 0xFF000000, which is -16777216 as a signed int. So when averageColor / -16777216 falls between 0.99 and 1.00, i.e. the average is essentially black, we treat the frame as weak light (the threshold can be tuned):


private int getAverageColor(Bitmap bitmap) {
    int redBucket = 0;
    int greenBucket = 0;
    int blueBucket = 0;
    int pixelCount = 0;
    for (int y = 0; y < bitmap.getHeight(); y++) {
        for (int x = 0; x < bitmap.getWidth(); x++) {
            int c = bitmap.getPixel(x, y);
            pixelCount++;
            redBucket += Color.red(c);
            greenBucket += Color.green(c);
            blueBucket += Color.blue(c);
        }
    }
    int averageColor = Color.rgb(redBucket / pixelCount,
            greenBucket / pixelCount, blueBucket / pixelCount);
    return averageColor;
}

The final method: to avoid running out of memory, the frame is decoded at one eighth of its width and height to build the RGB array, the average color is computed from that downscaled bitmap, and the bitmaps are recycled as soon as the analysis is done.


// Analyze the average RGB color of the preview frame.
private void analysisColor(byte[] data, int width, int height) {
    // Decode at 1/8 of the preview dimensions to keep memory usage low.
    int[] rgb = decodeYUV420SP(data, width / 8, height / 8);
    Bitmap bmp = Bitmap.createBitmap(rgb, width / 8, height / 8, Bitmap.Config.ARGB_8888);
    if (bmp != null) {
        // Analyze a 10x10 pixel patch taken from the center of the downscaled bitmap.
        Bitmap resizeBitmap = Bitmap.createBitmap(bmp, bmp.getWidth() / 2, bmp.getHeight() / 2, 10, 10);
        float color = (float) getAverageColor(resizeBitmap);
        DecimalFormat decimalFormat1 = new DecimalFormat("0.00");
        String percent = decimalFormat1.format(color / -16777216);
        float floatPercent = Float.parseFloat(percent);
        Constants.isWeakLight = floatPercent >= 0.99 && floatPercent <= 1.00;
        if (null != resizeBitmap) {
            resizeBitmap.recycle();
        }
        bmp.recycle();
    }
}

That basically covers weak-light detection; the sampling and the threshold can be fine-tuned as needed.
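The flashlight button itself is not shown in this article, so purely as a rough sketch: once Constants.isWeakLight flips, the capture screen can reveal a torch button and toggle the flash through Camera.Parameters. The camera instance can come from this project's CameraManager.getCameraNotStatic(); the TorchHelper class itself is hypothetical, while FLASH_MODE_TORCH and FLASH_MODE_OFF are standard (pre-Camera2) Camera API.

import android.hardware.Camera;

import java.util.List;

// Hypothetical helper for the torch button shown in the first GIF.
public final class TorchHelper {

    private TorchHelper() {
    }

    /** Turns the torch on or off if the device supports it. */
    public static void setTorch(Camera camera, boolean on) {
        if (camera == null) {
            return;
        }
        Camera.Parameters parameters = camera.getParameters();
        List<String> modes = parameters.getSupportedFlashModes();
        String target = on ? Camera.Parameters.FLASH_MODE_TORCH
                : Camera.Parameters.FLASH_MODE_OFF;
        if (modes != null && modes.contains(target)) {
            parameters.setFlashMode(target);
            camera.setParameters(parameters);
        }
    }
}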


Auto-zoom while scanning

The decode result carries the corner coordinates of the QR code. From those points we can compute the size of the code's rectangle, compare it with the framing rectangle, and zoom in accordingly; from what I can tell WeChat does something similar. The drawback is that the zoom happens only after the code has already been decoded successfully, which feels a bit redundant; it will do for now and can be optimized later. The code is below:


/*
 * Copyright (C) 2010 ZXing authors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.google.zxing.client.android;

import com.google.zxing.BinaryBitmap;
import com.google.zxing.DecodeHintType;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.PlanarYUVLuminanceSource;
import com.google.zxing.ReaderException;
import com.google.zxing.Result;
import com.google.zxing.common.Constants;
import com.google.zxing.common.HybridBinarizer;

import android.graphics.Bitmap;
import android.graphics.Color;
import android.graphics.Rect;
import android.hardware.Camera;
import android.os.Bundle;
import android.os.Handler;
import android.os.Looper;
import android.os.Message;
import android.util.Log;

import java.text.DecimalFormat;
import java.util.Map;

final class DecodeHandler extends Handler {

    private static final String TAG = DecodeHandler.class.getSimpleName();

    private final CaptureActivity activity;
    private final MultiFormatReader multiFormatReader;
    private boolean running = true;
    private int frameCount;

    DecodeHandler(CaptureActivity activity, Map<DecodeHintType, Object> hints) {
        multiFormatReader = new MultiFormatReader();
        multiFormatReader.setHints(hints);
        this.activity = activity;
    }

    @Override
    public void handleMessage(Message message) {
        if (!running) {
            return;
        }
        if (message.what == R.id.decode) {
            decode((byte[]) message.obj, message.arg1, message.arg2);
        } else if (message.what == R.id.quit) {
            running = false;
            Looper.myLooper().quit();
        }
    }

    /**
     * Decode the data within the viewfinder rectangle, and time how long it took. For efficiency,
     * reuse the same reader objects from one decode to the next.
     *
     * @param data   The YUV preview frame.
     * @param width  The width of the preview frame.
     * @param height The height of the preview frame.
     */
    private void decode(byte[] data, int width, int height) {
        // Rotate the portrait preview data by 90 degrees.
        byte[] rotatedData = new byte[data.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                rotatedData[x * height + height - y - 1] = data[x + y * width];
            }
        }
        frameCount++;
        // Skip the first two frames, then analyze the preview frame's color every other frame.
        if (frameCount > 2 && frameCount % 2 == 0) {
            analysisColor(rotatedData, width, height);
        }
        long start = System.currentTimeMillis();
        Result rawResult = null;
        final PlanarYUVLuminanceSource source =
                activity.getCameraManager().buildLuminanceSource(rotatedData, height, width);
        if (source != null) {
            BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
            try {
                rawResult = multiFormatReader.decodeWithState(bitmap);
            } catch (ReaderException re) {
                // continue
            } finally {
                multiFormatReader.reset();
            }
        }

        final Handler handler = activity.getHandler();
        if (rawResult != null) {
            // Don't log the barcode contents for security.
            long end = System.currentTimeMillis();
            Log.d(TAG, "Found barcode in " + (end - start) + " ms");
            if (handler != null) {
                // Estimate the QR code's on-screen size from the distance between two result points.
                float point1X = rawResult.getResultPoints()[0].getX();
                float point1Y = rawResult.getResultPoints()[0].getY();
                float point2X = rawResult.getResultPoints()[1].getX();
                float point2Y = rawResult.getResultPoints()[1].getY();
                int len = (int) Math.sqrt(Math.abs(point1X - point2X) * Math.abs(point1X - point2X)
                        + Math.abs(point1Y - point2Y) * Math.abs(point1Y - point2Y));
                Rect frameRect = activity.getCameraManager().getFramingRect();
                if (frameRect != null) {
                    int frameWidth = frameRect.right - frameRect.left;
                    Camera camera = activity.getCameraManager().getCameraNotStatic();
                    Camera.Parameters parameters = camera.getParameters();
                    final int maxZoom = parameters.getMaxZoom();
                    int zoom = parameters.getZoom();
                    if (parameters.isZoomSupported()) {
                        // If the code spans less than a quarter of the frame width, zoom in
                        // and delay reporting the result so the zoomed preview is visible.
                        if (len <= frameWidth / 4) {
                            if (zoom == 0) {
                                zoom = maxZoom / 3;
                            } else {
                                zoom = zoom + 5;
                            }
                            if (zoom > maxZoom) {
                                zoom = maxZoom;
                            }
                            parameters.setZoom(zoom);
                            camera.setParameters(parameters);
                            final Result finalRawResult = rawResult;
                            postDelayed(new Runnable() {
                                @Override
                                public void run() {
                                    Message message = Message.obtain(handler, R.id.decode_succeeded, finalRawResult);
                                    Bundle bundle = new Bundle();
                                    bundle.putParcelable(DecodeThread.BARCODE_BITMAP,
                                            source.renderCroppedGreyscaleBitmap());
                                    message.setData(bundle);
                                    message.sendToTarget();
                                }
                            }, 1000);
                        } else {
                            Message message = Message.obtain(handler, R.id.decode_succeeded, rawResult);
                            Bundle bundle = new Bundle();
                            bundle.putParcelable(DecodeThread.BARCODE_BITMAP, source.renderCroppedGreyscaleBitmap());
                            message.setData(bundle);
                            message.sendToTarget();
                        }
                    }
                } else {
                    Message message = Message.obtain(handler, R.id.decode_succeeded, rawResult);
                    Bundle bundle = new Bundle();
                    bundle.putParcelable(DecodeThread.BARCODE_BITMAP, source.renderCroppedGreyscaleBitmap());
                    message.setData(bundle);
                    message.sendToTarget();
                }
            }
        } else {
            if (handler != null) {
                Message message = Message.obtain(handler, R.id.decode_failed);
                message.sendToTarget();
            }
        }
    }

    // Analyze the average RGB color of the preview frame.
    private void analysisColor(byte[] data, int width, int height) {
        // Decode at 1/8 of the preview dimensions to keep memory usage low.
        int[] rgb = decodeYUV420SP(data, width / 8, height / 8);
        Bitmap bmp = Bitmap.createBitmap(rgb, width / 8, height / 8, Bitmap.Config.ARGB_8888);
        if (bmp != null) {
            // Analyze a 10x10 pixel patch taken from the center of the downscaled bitmap.
            Bitmap resizeBitmap = Bitmap.createBitmap(bmp, bmp.getWidth() / 2, bmp.getHeight() / 2, 10, 10);
            float color = (float) getAverageColor(resizeBitmap);
            DecimalFormat decimalFormat1 = new DecimalFormat("0.00");
            String percent = decimalFormat1.format(color / -16777216);
            float floatPercent = Float.parseFloat(percent);
            Constants.isWeakLight = floatPercent >= 0.99 && floatPercent <= 1.00;
            if (null != resizeBitmap) {
                resizeBitmap.recycle();
            }
            bmp.recycle();
        }
    }

    private int[] decodeYUV420SP(byte[] yuv420sp, int width, int height) {
        final int frameSize = width * height;
        int[] rgb = new int[width * height];
        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = (0xff & ((int) yuv420sp[yp])) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) {
                    v = (0xff & yuv420sp[uvp++]) - 128;
                    u = (0xff & yuv420sp[uvp++]) - 128;
                }
                int y1192 = 1192 * y;
                int r = (y1192 + 1634 * v);
                int g = (y1192 - 833 * v - 400 * u);
                int b = (y1192 + 2066 * u);
                if (r < 0) r = 0;
                else if (r > 262143) r = 262143;
                if (g < 0) g = 0;
                else if (g > 262143) g = 262143;
                if (b < 0) b = 0;
                else if (b > 262143) b = 262143;
                rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
            }
        }
        return rgb;
    }

    private int getAverageColor(Bitmap bitmap) {
        int redBucket = 0;
        int greenBucket = 0;
        int blueBucket = 0;
        int pixelCount = 0;
        for (int y = 0; y < bitmap.getHeight(); y++) {
            for (int x = 0; x < bitmap.getWidth(); x++) {
                int c = bitmap.getPixel(x, y);
                pixelCount++;
                redBucket += Color.red(c);
                greenBucket += Color.green(c);
                blueBucket += Color.blue(c);
            }
        }
        int averageColor = Color.rgb(redBucket / pixelCount,
                greenBucket / pixelCount, blueBucket / pixelCount);
        return averageColor;
    }
}
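One possible refinement, since the article already notes the zoom can be optimized later: the stepped setZoom() call above makes the preview jump, and on devices that support it the same Camera API can animate the zoom instead. The ZoomHelper class below is hypothetical; isSmoothZoomSupported() and startSmoothZoom() are standard (pre-Camera2) Camera API.

import android.hardware.Camera;

// Hypothetical helper: move toward the target zoom smoothly when the device supports it,
// instead of the stepped setZoom() jump used above.
final class ZoomHelper {

    private ZoomHelper() {
    }

    static void zoomTowards(Camera camera, int targetZoom) {
        Camera.Parameters parameters = camera.getParameters();
        if (!parameters.isZoomSupported()) {
            return;
        }
        int clamped = Math.min(targetZoom, parameters.getMaxZoom());
        if (parameters.isSmoothZoomSupported()) {
            // The camera driver animates the zoom; Camera.OnZoomChangeListener reports progress.
            camera.startSmoothZoom(clamped);
        } else {
            parameters.setZoom(clamped);
            camera.setParameters(parameters);
        }
    }
}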