
OpenGL ES Rendering and Simple Filter Effects

2017-12-10 11:59
--- Update 12.25 ---

Off-screen Rendering

glPixelStorei

OpenGL actually supports pixel data stored with this kind of row "alignment" as well. You only need to change the unpack alignment with glPixelStorei, like this: int alignment = 4; glPixelStorei(GL_UNPACK_ALIGNMENT, alignment); The first parameter says which packing value is being set, the second is the value itself. Pixel rows can be aligned to 1 byte (effectively no alignment), 2 bytes (if a row length is odd, one padding byte is added), 4 bytes (rows are padded up to a multiple of four bytes) or 8 bytes, corresponding to alignment values of 1, 2, 4 and 8. The default value is 4, which happens to match the row alignment used by BMP files.
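For example, when the source rows are tightly packed (no padding at the end of each row), the unpack alignment can be dropped to 1 before uploading the pixels. A minimal sketch, where width, height and pixelData are placeholders for the source image:

// tightly packed RGB rows: tell OpenGL not to expect any row padding
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixelData);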

--- Init Commit ---

I am also an OpenGL ES beginner, so some concepts or steps in this article may be misunderstood; corrections are welcome.

This write-up is fairly rough, and reading it assumes some basic knowledge of OpenGL or OpenGL ES.

First, screenshots of the result:



Demo project: https://github.com/DribsAndDrabs1129/OpenGLReview

I had worked on an OpenGL ES project before, but back then I just followed an OpenGL tutorial step by step to get the job done, without really sorting out the pipeline or the underlying principles. Later I read several installments of 叶孤城's 从0打造GPUImage series, found them very well organized, and went through the OpenGL ES material again following his line of thought.

As far as my current understanding goes, the benefit of using OpenGL is that it makes full use of the device's GPU and offloads work from the CPU, and it lets you process and render images at a much lower level (video is just a sequence of image frames). iOS's UIKit itself is built on top of OpenGL ES (see the article iOS开发,视图渲染与性能优化). So it is well worth understanding the basics of how OpenGL ES works and what shaders do.

Apple itself provides the GLKView class, a view backed by OpenGL. The book 《OpenGL ES应用开发实践指南:iOS卷》 has the reader use GLKView from the very beginning and, to make the underlying mechanics easier to follow, builds its own classes such as AGLKContext and AGLKVertexAttribArrayBuffer. The main flow is:

1. Set up the GL context and initialize the OpenGL API version; OpenGL ES 2 is the common choice:

GLKView *view = (GLKView *)self.view;
NSAssert([view isKindOfClass:[GLKView class]],
         @"View controller's view is not a GLKView");

view.context = [[EAGLContext alloc]
                initWithAPI:kEAGLRenderingAPIOpenGLES2];

[EAGLContext setCurrentContext:view.context];

2. Set up a GLKBaseEffect for the rendering effects. Apple's documentation describes GLKBaseEffect as follows: "GLKBaseEffect is designed to simplify visual effects common to many OpenGL applications today. For iOS, GLKBaseEffect requires at least OpenGL ES 2.0 and for OS X, GLKBaseEffect requires at least an OpenGL Core Profile."

Its commonly used properties include useConstantColor / constantColor, transform (the model-view and projection matrices) and texture2d0.
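A minimal configuration sketch (illustrative only; it assumes a GLKBaseEffect property named baseEffect and is not the exact code from the demo):

self.baseEffect = [[GLKBaseEffect alloc] init];
self.baseEffect.useConstantColor = GL_TRUE;        // draw with a single constant color
self.baseEffect.constantColor = GLKVector4Make(1.0f, 1.0f, 1.0f, 1.0f);
self.baseEffect.transform.projectionMatrix = GLKMatrix4Identity;   // no projection needed for a flat quad

// ... immediately before issuing draw calls:
[self.baseEffect prepareToDraw];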

There are also a few initialization settings, such as the clear color:

GLKVector4 clearColorRGBA = GLKVector4Make(1.0f, 1.0f, 1.0f, 1.0f); // RGBA
glClearColor(clearColorRGBA.r, clearColorRGBA.g, clearColorRGBA.b, clearColorRGBA.a);


3. Next comes creating and using buffers (see the article OpenGL ES学习笔记(一):相关基本概念 for the related concepts).

The main steps are: generate, bind, buffer the data, enable, set the attribute pointer, draw, and delete;
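A sketch of that buffer lifecycle (vertexBufferID and vertices are placeholder names, not necessarily the ones used in the demo):

GLuint vertexBufferID;
glGenBuffers(1, &vertexBufferID);                    // 1. generate
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);       // 2. bind
glBufferData(GL_ARRAY_BUFFER,                        // 3. copy the vertex data into the buffer
             sizeof(vertices), vertices, GL_STATIC_DRAW);
// 4./5. enable the attribute array and set its pointer (see the drawInRect: code below)
// 6. glDrawArrays(...) draws from the currently bound buffer
// 7. glDeleteBuffers(1, &vertexBufferID); releases the buffer once it is no longer needed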

4. Create an OpenGL texture from an image (or raw image data) and render with it. This part is fairly foolproof: just assign the generated texture name (ID) and target to the baseEffect and GLKit takes care of the rest;
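With GLKTextureLoader this boils down to a few lines, roughly as follows (a sketch; the image name and the baseEffect property are assumptions):

CGImageRef imageRef = [UIImage imageNamed:@"demo.png"].CGImage;
GLKTextureInfo *textureInfo = [GLKTextureLoader textureWithCGImage:imageRef
                                                           options:nil
                                                             error:NULL];
self.baseEffect.texture2d0.name = textureInfo.name;       // hand the texture ID to the effect
self.baseEffect.texture2d0.target = textureInfo.target;   // usually GL_TEXTURE_2D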

GLKView automatically calls this method periodically:

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect;


So step 5, the actual drawing, goes inside this method:

// Clear back frame buffer (erase previous drawing)
glClear(GL_COLOR_BUFFER_BIT);

glBindBuffer(GL_ARRAY_BUFFER, _Buffer);                 // Step 2: bind
glEnableVertexAttribArray(GLKVertexAttribPosition);     // Step 4: enable
// Step 5: set the attribute pointer
glVertexAttribPointer(GLKVertexAttribPosition,          // identifies the attribute to use
                      3,                                // number of coordinates for the attribute
                      GL_FLOAT,                         // data is floating point
                      GL_FALSE,                         // no fixed-point scaling
                      sizeof(SceneVertex),              // total bytes stored per vertex
                      NULL + offsetof(SceneVertex, positionCoords)); // offset from the start of each vertex
                                                                     // to the first coordinate of the attribute

glBindBuffer(GL_ARRAY_BUFFER, _Buffer);                 // Step 2: bind
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);    // Step 4: enable
// Step 5: set the attribute pointer
glVertexAttribPointer(GLKVertexAttribTexCoord0,
                      2,                                // number of coordinates for the attribute
                      GL_FLOAT,                         // data is floating point
                      GL_FALSE,                         // no fixed-point scaling
                      sizeof(SceneVertex),              // total bytes stored per vertex
                      NULL + offsetof(SceneVertex, textureCoords)); // offset from the start of each vertex
                                                                    // to the first coordinate of the attribute

// Step 6
// Draw a triangle strip using the four vertices in the currently bound vertex buffer
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);


The other approach is the shader-based OpenGL ES usage summarized by 吴彦祖 (叶孤城); he does not use the GLKit classes, so from the start a few more things have to be set up and specified by hand:

1. Set up the OpenGL context and the drawable properties of a CAEAGLLayer, and add the layer to the view's layer:

/***  Set up the context   ***/
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; // OpenGL ES 2.0
[EAGLContext setCurrentContext:_eaglContext]; // make it the current context

/***  Add the layer   ***/
_eaglLayer = [CAEAGLLayer layer];
_eaglLayer.frame = self.view.bounds;
_eaglLayer.backgroundColor = [UIColor yellowColor].CGColor;
_eaglLayer.opaque = YES;

_eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
                                 kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
[self.view.layer addSublayer:_eaglLayer];


The drawableProperties keys are documented in the header:

/************************************************************************/
/* Keys for EAGLDrawable drawableProperties dictionary                  */
/*                                                                      */
/* kEAGLDrawablePropertyRetainedBacking:                                */
/*  Type: NSNumber (boolean)                                            */
/*  Legal Values: True/False                                            */
/*  Default Value: False                                                */
/*  Description: True if EAGLDrawable contents are retained after a     */
/*               call to presentRenderbuffer.  False, if they are not   */
/*                                                                      */
/* kEAGLDrawablePropertyColorFormat:                                    */
/*  Type: NSString                                                      */
/*  Legal Values: kEAGLColorFormat*                                     */
/*  Default Value: kEAGLColorFormatRGBA8                                */
/*  Description: Format of pixels in renderbuffer                       */
/************************************************************************/


2. Set up the framebuffer and renderbuffer

/***  Delete any existing framebuffer and renderbuffer   ***/
if (_renderBuffer) {
    glDeleteRenderbuffers(1, &_renderBuffer);
    _renderBuffer = 0;
}

if (_frameBuffer) {
    glDeleteFramebuffers(1, &_frameBuffer);
    _frameBuffer = 0;
}

/***  Create the framebuffer and renderbuffer   ***/
glGenFramebuffers(1, &_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);

glGenRenderbuffers(1, &_renderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _renderBuffer);

glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _renderBuffer);
/* Attaches an EAGLDrawable as storage for the OpenGL ES renderbuffer object bound to <target> */
[_eaglContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];

GLint width = 0;
GLint height = 0;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
// check that the framebuffer is complete
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Failed to make complete framebuffer object: %i", glCheckFramebufferStatus(GL_FRAMEBUFFER));
}

3. Set the viewport and clear color

glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glViewport(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);

4. Compile and link the shader files

- (void)setShader{
    GLuint vertexShaderName = [self compileShader:@"vertexShader.vsh" withType:GL_VERTEX_SHADER];
//    GLuint fragmenShaderName = [self compileShader:@"fragmentShader.fsh" withType:GL_FRAGMENT_SHADER];
    GLuint fragmenShaderName = [self compileShader:@"luminance.fsh" withType:GL_FRAGMENT_SHADER];

    _programHandle = glCreateProgram();
    glAttachShader(_programHandle, vertexShaderName);
    glAttachShader(_programHandle, fragmenShaderName);

    glLinkProgram(_programHandle);

    GLint linkSuccess;
    glGetProgramiv(_programHandle, GL_LINK_STATUS, &linkSuccess);
    if (linkSuccess == GL_FALSE) {
        GLchar messages[256];
        glGetProgramInfoLog(_programHandle, sizeof(messages), 0, &messages[0]);
        NSString *messageString = [NSString stringWithUTF8String:messages];
        NSLog(@"%@", messageString);
        exit(1);
    }

    _positionSlot = glGetAttribLocation(_programHandle, [@"in_Position" UTF8String]);
    _textureSlot = glGetUniformLocation(_programHandle, [@"in_Texture" UTF8String]);
    _textureCoordSlot = glGetAttribLocation(_programHandle, [@"in_TexCoord" UTF8String]);
    _colorSlot = glGetAttribLocation(_programHandle, [@"in_Color" UTF8String]);
    _Saturation_brightness = glGetAttribLocation(_programHandle, [@"in_Saturation_Brightness" UTF8String]);
    _enableGrayScale = glGetAttribLocation(_programHandle, [@"in_greyScale" UTF8String]);
    _enableNegation = glGetAttribLocation(_programHandle, [@"in_negation" UTF8String]);

    glUseProgram(_programHandle);
}

- (GLuint)compileShader:(NSString *)shaderName withType:(GLenum)shaderType {
    NSString *path = [[NSBundle mainBundle] pathForResource:shaderName ofType:nil];
    NSError *error = nil;
    NSString *shaderString = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:&error];
    if (!shaderString) {
        NSLog(@"%@", error.localizedDescription);
    }

    const char *shaderUTF8 = [shaderString UTF8String];
    GLint shaderLength = (GLint)[shaderString length];
    GLuint shaderHandle = glCreateShader(shaderType);
    glShaderSource(shaderHandle, 1, &shaderUTF8, &shaderLength);
    glCompileShader(shaderHandle);

    GLint compileSuccess;
    glGetShaderiv(shaderHandle, GL_COMPILE_STATUS, &compileSuccess);
    if (compileSuccess == GL_FALSE) {
        GLchar message[256];
        glGetShaderInfoLog(shaderHandle, sizeof(message), 0, &message[0]);
        NSString *messageString = [NSString stringWithUTF8String:message];
        NSLog(@"%@", messageString);
        exit(1);
    }
    return shaderHandle;
}

This shader-compilation code follows 吴彦祖's example; the versions you find online are all more or less the same:

Find the shader file in the app bundle, read it in as a string, convert it to a UTF-8 C string, and hand it to OpenGL to compile. The attribute and uniform locations are then looked up from the linked program:

_positionSlot = glGetAttribLocation(_programHandle,[@"in_Position" UTF8String]);
_textureSlot = glGetUniformLocation(_programHandle, [@"in_Texture" UTF8String]);
_textureCoordSlot = glGetAttribLocation(_programHandle, [@"in_TexCoord" UTF8String]);
_colorSlot = glGetAttribLocation(_programHandle, [@"in_Color" UTF8String]);
_Saturation_brightness = glGetAttribLocation(_programHandle, [@"in_Saturation_Brightness" UTF8String]);
_enableGrayScale = glGetAttribLocation(_programHandle, [@"in_greyScale" UTF8String]);
_enableNegation = glGetAttribLocation(_programHandle, [@"in_negation" UTF8String]);


These are the inputs to the shaders:

_positionSlot is the vertex position

_textureCoordSlot is the corresponding texture coordinate

_textureSlot is the texture being passed in

_colorSlot is the fragment color

_Saturation_brightness is a parameter I added myself, used to control color saturation and brightness

_enableGrayScale is a custom parameter, a grayscale switch (0 or 1)

_enableNegation is a custom parameter, an inversion switch (0 or 1; the effect resembles a film negative)
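In the shaders, these show up as attribute and uniform declarations with exactly those names. Below is a minimal vertex-shader sketch; the real vertexShader.vsh in the demo may differ, and the varying names are my own:

attribute vec4  in_Position;
attribute vec2  in_TexCoord;
attribute vec4  in_Color;
attribute vec2  in_Saturation_Brightness;
attribute float in_greyScale;
attribute float in_negation;

varying vec2  v_TexCoord;          // handed on to the fragment shader
varying vec4  v_Color;
varying vec2  v_SatBrightness;
varying float v_GreyScale;
varying float v_Negation;

void main() {
    gl_Position     = in_Position;
    v_TexCoord      = in_TexCoord;
    v_Color         = in_Color;
    v_SatBrightness = in_Saturation_Brightness;
    v_GreyScale     = in_greyScale;
    v_Negation      = in_negation;
}

Note that in_Texture is a sampler uniform in the fragment shader, which is why it is looked up with glGetUniformLocation rather than glGetAttribLocation.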

5. Set up the texture

- (void)setTexture{
    glDeleteTextures(1, &texName);

    /***  Generate Texture   ***/
    texName = [self getTextureFromImage:[UIImage imageNamed:picName]];

    /***  Bind Texture   ***/
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, texName);
    glUniform1i(_textureSlot, 1);
}

- (GLuint)getTextureFromImage:(UIImage *)image {
    CGImageRef imageRef = [image CGImage];
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    GLubyte *textureData = (GLubyte *)malloc(width * height * 4);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(textureData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0, height);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    GLuint texName;
    glGenTextures(1, &texName);
    glBindTexture(GL_TEXTURE_2D, texName);

    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    glBindTexture(GL_TEXTURE_2D, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(textureData);
    return texName;
}


Compared with the GLKTextureLoader used in the first approach, this is essentially what GLKTextureLoader does behind the scenes. The bitmap context is created so the image can be redrawn: because of the coordinate system Core Graphics uses on iOS, an image drawn directly would end up flipped (comment out the CGContextScaleCTM and CGContextTranslateCTM calls to see the effect in the final rendering).
As with the buffers, the sequence is: activate a texture unit, generate a texture, bind it, and then specify the texture data (textureData).

6. Drawing

- (void)drawTrangle {
    UIImage *image = [UIImage imageNamed:picName];
    CGRect realRect = AVMakeRectWithAspectRatioInsideRect(image.size, self.view.bounds);
    CGFloat widthRatio = realRect.size.width/self.view.bounds.size.width;
    CGFloat heightRatio = realRect.size.height/self.view.bounds.size.height;

//    const GLfloat vertices[] = {
//        -1, -1, 0,   // bottom-left
//        1,  -1, 0,   // bottom-right
//        -1, 1,  0,   // top-left
//        1,  1,  0 }; // top-right
    const GLfloat vertices[] = {
        -widthRatio, -heightRatio, 0,   // bottom-left
        widthRatio,  -heightRatio, 0,   // bottom-right
        -widthRatio, heightRatio,  0,   // top-left
        widthRatio,  heightRatio,  0 }; // top-right
    glEnableVertexAttribArray(_positionSlot);
    glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, 0, vertices);

    // normal texture coordinates
    static const GLfloat coords[] = {
        0, 0,
        1, 0,
        0, 1,
        1, 1
    };

    glEnableVertexAttribArray(_textureCoordSlot);
    glVertexAttribPointer(_textureCoordSlot, 2, GL_FLOAT, GL_FALSE, 0, coords);

    static const GLfloat colors[] = {
        1, 0, 0, 1,
        0, 0, 0, 1,
        0, 0, 0, 1,
        1, 0, 0, 1
    };

    glEnableVertexAttribArray(_colorSlot);
    glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, 0, colors);

    // saturation and brightness
    GLfloat saturation_brightness[] = {
        saturationPara, brightnessPara,
        saturationPara, brightnessPara,
        saturationPara, brightnessPara,
        saturationPara, brightnessPara
    };
    glEnableVertexAttribArray(_Saturation_brightness);
    glVertexAttribPointer(_Saturation_brightness, 2, GL_FLOAT, GL_FALSE, 0, saturation_brightness);

    // grayscale switch
    GLfloat grayScale[] = {
        grayScalePara,
        grayScalePara,
        grayScalePara,
        grayScalePara
    };
    glEnableVertexAttribArray(_enableGrayScale);
    glVertexAttribPointer(_enableGrayScale, 1, GL_FLOAT, GL_FALSE, 0, grayScale);

    // negation (inversion) switch
    GLfloat negation[] = {
        negationPara,
        negationPara,
        negationPara,
        negationPara
    };
    glEnableVertexAttribArray(_enableNegation);
    glVertexAttribPointer(_enableNegation, 1, GL_FLOAT, GL_FALSE, 0, negation);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    [_eaglContext presentRenderbuffer:GL_RENDERBUFFER];
}
When drawing, the data is passed in according to the parameters defined when compiling the shaders: for each attribute, first enable the vertex attribute array, then set its pointer.

The usual OpenGL vertex coordinates for a full-screen quad would be:

-1, -1, 0,   // bottom-left
 1, -1, 0,   // bottom-right
-1,  1, 0,   // top-left
 1,  1, 0    // top-right

But you cannot guarantee that the image being drawn has the same aspect ratio as the iPhone screen, so the image is usually scaled to fit. AVMakeRectWithAspectRatioInsideRect returns the largest rect that fits the image inside the screen while keeping its aspect ratio; its documentation reads:

/*!
@function					AVMakeRectWithAspectRatioInsideRect
@abstract					Returns a scaled CGRect that maintains the aspect ratio specified by a CGSize within a bounding CGRect.
@discussion				This is useful when attempting to fit the presentationSize property of an AVPlayerItem within the bounds of another CALayer.
You would typically use the return value of this function as an AVPlayerLayer frame property value. For example:
myPlayerLayer.frame = AVMakeRectWithAspectRatioInsideRect(myPlayerItem.presentationSize, mySuperLayer.bounds);
@param aspectRatio			The width & height ratio, or aspect, you wish to maintain.
@param	boundingRect		The bounding CGRect you wish to fit into.
*/

About glDrawArrays(GLenum mode, GLint first, GLsizei count): drawing can use different modes. Fundamentally, OpenGL draws triangles; the triangle is the basic drawing primitive. Even a 3D model with an apparently smooth spherical surface is built from countless triangles, subdivided until the surface looks smooth.

So triangles are normally drawn three vertices at a time, and the rectangular iPhone screen can be covered by two triangles. For the specific drawing modes, see OpenGL基本图元转换为GL_TRIANGLES; a quick comparison for this quad is sketched below.
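For this quad, GL_TRIANGLE_STRIP with four vertices yields the two triangles (0, 1, 2) and (1, 2, 3). Drawing the same quad with GL_TRIANGLES needs every triangle spelled out; a sketch using the unscaled coordinates:

const GLfloat quadAsTriangles[] = {
    -1, -1, 0,    1, -1, 0,   -1,  1, 0,     // first triangle: bottom-left, bottom-right, top-left
     1, -1, 0,   -1,  1, 0,    1,  1, 0 };   // second triangle: bottom-right, top-left, top-right
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, 0, quadAsTriangles);
glDrawArrays(GL_TRIANGLES, 0, 6);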

On shader programming

Shader programming feels full of pitfalls: variables and data types are checked strictly and must match exactly, so take extra care when writing shaders.

See OpenGL中shader使用 for reference.

The demo also processes frames captured from the camera in real time and renders them to a view with OpenGL. The flow is roughly the same as in the two approaches above; three points deserve attention:

1. The image captured by the camera is, by default, rotated 90° counter-clockwise, while AVCaptureVideoPreviewLayer shows it the right way up, so presumably Apple corrects it internally. If you render the frames yourself with OpenGL, you have to rotate the image 90° clockwise; one way is to adjust the texture coordinates in the shader, see the demo (and the sketch after this list);

2. The capture output has a videoSettings property for choosing the output pixel format. As mentioned earlier, glTexImage2D accepts GL_RGBA but not GL_BGRA, while Apple's video output offers kCVPixelFormatType_32BGRA and kCVPixelFormatType_32RGBA. Don't celebrate too early, though: kCVPixelFormatType_32RGBA is not supported on some devices, so the shader has to handle the BGRA-to-RGBA conversion, see the demo;

3. Always watch memory when using OpenGL: delete renderbuffers and textures that are no longer needed, otherwise memory usage will balloon.
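For point 1, the demo handles the rotation inside the shader; an equivalent CPU-side illustration (an assumption, not the demo's actual code) is to feed 90°-rotated texture coordinates instead of the "normal" ones:

// instead of {0,0, 1,0, 0,1, 1,1}, use a permutation that rotates the sampled image by 90°
static const GLfloat rotatedCoords[] = {
    0, 1,
    0, 0,
    1, 1,
    1, 0
};
glVertexAttribPointer(_textureCoordSlot, 2, GL_FLOAT, GL_FALSE, 0, rotatedCoords);
// if the result turns out rotated the wrong way, use the opposite permutation {1,0, 1,1, 0,0, 0,1}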

On brightness, saturation and grayscale (basic image processing)

A grayscale image is produced by combining each pixel's R, G and B values with fixed weights and writing the result back to all three channels;

The simplest way to adjust brightness is to add the same offset to R, G and B; the values range from 0 to 1, and anything above 1 is clamped to 1;

For saturation, a gray value greyScaleColor is computed from the image's RGB data, and the final color is blended according to the user-supplied saturation value, since mix(greyScaleColor, textureColor.rgb, saturation) = greyScaleColor * (1 - saturation) + textureColor.rgb * saturation;

Inversion takes each pixel's RGB values and replaces each channel with its complement (R becomes 1 - R, and likewise for G and B), which looks like a film negative.
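Putting the four effects together, a fragment shader along these lines matches the descriptions above. It is an illustrative sketch, not necessarily identical to the luminance.fsh used in the demo; the varying names follow the vertex-shader sketch earlier, and the order in which the effects are applied here is arbitrary:

precision mediump float;

uniform sampler2D in_Texture;

varying vec2  v_TexCoord;
varying vec2  v_SatBrightness;   // x = saturation, y = brightness
varying float v_GreyScale;       // 0 or 1
varying float v_Negation;        // 0 or 1

// standard luminance weights for converting RGB to grey
const vec3 W = vec3(0.2125, 0.7154, 0.0721);

void main() {
    vec4 color = texture2D(in_Texture, v_TexCoord);

    // brightness: add the same offset to R, G and B, clamping to [0, 1]
    vec3 rgb = clamp(color.rgb + vec3(v_SatBrightness.y), 0.0, 1.0);

    // saturation: blend between the grey version and the original color
    float grey = dot(rgb, W);
    rgb = mix(vec3(grey), rgb, v_SatBrightness.x);

    // grayscale switch: replace RGB with the luminance value
    if (v_GreyScale > 0.5) {
        rgb = vec3(dot(rgb, W));
    }

    // negation switch: 1 - RGB gives the film-negative look
    if (v_Negation > 0.5) {
        rgb = vec3(1.0) - rgb;
    }

    gl_FragColor = vec4(rgb, color.a);
}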