
iOS Screenshot Techniques in Code [UIKit and OpenGL ES]

In previous iOS projects I had always taken screenshots the following way:

CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
    // On iOS 4 and later this symbol exists; a scale of 0 means "use the screen's scale"
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
} else {
    // On iOS prior to 4, fall back to the non-scale-aware variant
    UIGraphicsBeginImageContext(imageSize);
}

[[self.view layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Recently I was working on a project with LBS features that uses Baidu Maps. I needed to mark the current coordinate on the map and then share a screenshot. With the approach above I ran into a problem, and the screenshot method from Apple's developer documentation below did not work either: the capture only contained my custom location tip image, while the map behind it came out solid white.

Apple's documentation: http://developer.apple.com/library/ios/#qa/qa1703/_index.html

Screen Capture in UIKit Applications

Q: How do I take a screenshot in my UIKit application?

- (UIImage*)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);

            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];

            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}

Thinking it over, the reason is that Baidu Maps does not draw the map with UIKit: it renders with OpenGL ES, and -renderInContext: cannot capture the contents of an OpenGL ES-backed layer, which is why the map area came out white. Following the link in that Q&A to "OpenGL ES View Snapshot", I found the solution.

Apple's documentation: http://developer.apple.com/library/ios/#qa/qa1704/_index.html

OpenGL ES View Snapshot

Q: How do I take a snapshot of my OpenGL ES view and save the result in a UIImage?

// IMPORTANT: Call this method after you draw and before -presentRenderbuffer:.
- (UIImage*)snapshot:(UIView*)eaglview
{
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
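For context, here is a minimal sketch of where that call sits in a typical EAGL render loop. The method and ivar names (drawFrame, _context, _colorRenderbuffer) are placeholders for whatever your GL view class actually defines; the point is that the pixels must be read after drawing and before -presentRenderbuffer: hands the frame to the screen:

// Hypothetical render-loop method; the names are placeholders, not Apple's sample.
- (void)drawFrame {
    [EAGLContext setCurrentContext:_context]; // make the GL context current

    // ... issue all GL drawing commands for this frame ...

    // Grab the snapshot NOW: the renderbuffer still holds the finished frame.
    UIImage *frameImage = [self snapshot:self];
    // ... save or share frameImage here ...

    // Presenting may invalidate the renderbuffer contents, so it comes last.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    [_context presentRenderbuffer:GL_RENDERBUFFER_OES];
}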

The code above relies on some definitions from Apple's sample (the _colorRenderbuffer ivar, for example), so rather than adapt it I went with a snippet someone else had put together online. The OpenGL ES screenshot code is as follows:

// Free the pixel buffer once CoreGraphics is done with it.
static void releaseScreenshotData(void *info, const void *data, size_t size) {
    free((void *)data);
}

- (UIImage *)glToUIImage {
    // Assumes the map view's GL context is current and its framebuffer bound.
    NSInteger myDataLength = 1024 * 768 * 4; // 1024 = width, 768 = height, 4 bytes per RGBA pixel

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, 1024, 768, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < 768; y++) {
        for (int x = 0; x < 1024 * 4; x++) {
            buffer2[(767 - y) * 1024 * 4 + x] = buffer[y * 4 * 1024 + x];
        }
    }
    free(buffer); // the unflipped copy is no longer needed

    // make data provider with the flipped data; the release callback frees
    // buffer2 once the image is done with it (the original snippet leaked it).
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, releaseScreenshotData);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 1024;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // 32-bit RGBA needs explicit alpha info; the original snippet's bare
    // kCGBitmapByteOrderDefault is not a valid 32 bpp format (see QA1704).
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(1024, 768, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                        colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that, and release what we created along the way
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return myImage;
}
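One caveat with this snippet: the 1024 x 768 dimensions are hardcoded (they happen to match an iPad landscape screen). As a sketch of how to avoid that, and assuming the same context/framebuffer preconditions, you could query the current GL viewport instead; this helper is hypothetical and not part of the original snippet:

// Hypothetical helper, not from the original post: query the current GL
// viewport instead of hardcoding 1024x768. Assumes the map view's GL context
// and framebuffer are current when this is called.
- (CGSize)currentGLViewportSize {
    GLint viewport[4]; // x, y, width, height
    glGetIntegerv(GL_VIEWPORT, viewport);
    return CGSizeMake(viewport[2], viewport[3]);
}

With that size in hand, the width, height, and bytesPerRow values in glToUIImage can be derived at run time rather than baked in.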

With this code the screenshot finally contains the Baidu map. One small problem remains: the OpenGL ES capture contains none of the UIKit content. So I added one more step: after taking both screenshots, composite the two images together.

// Merge the two images: draw the GL screenshot first, then the UIKit
// screenshot on top. The 620x380 canvas and the offsets are hardcoded
// for this project's layout; adjust them for your own view sizes.
- (UIImage *)mergerImage:(UIImage *)firstImage secondImage:(UIImage *)secondImage {
    CGSize imageSize = CGSizeMake(620, 380);
    UIGraphicsBeginImageContext(imageSize);

    [firstImage drawInRect:CGRectMake(0, 0, firstImage.size.width, firstImage.size.height)];
    [secondImage drawInRect:CGRectMake(310 - 40, 190 - 60, secondImage.size.width, secondImage.size.height)];

    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return resultImage;
}
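If you would rather not hardcode the 620 x 380 canvas, a more general variant can size the canvas from the bottom image and center the top image over it. This is a sketch under my own assumptions (GL screenshot as the full canvas, overlay centered), not the layout the original project needed:

// Hypothetical generalized merge, not from the original post: the canvas takes
// the bottom image's size and the top image is drawn centered over it.
- (UIImage *)imageByCompositing:(UIImage *)topImage overImage:(UIImage *)bottomImage {
    UIGraphicsBeginImageContextWithOptions(bottomImage.size, NO, bottomImage.scale);

    [bottomImage drawInRect:CGRectMake(0, 0, bottomImage.size.width, bottomImage.size.height)];
    [topImage drawInRect:CGRectMake((bottomImage.size.width - topImage.size.width) / 2.0,
                                    (bottomImage.size.height - topImage.size.height) / 2.0,
                                    topImage.size.width,
                                    topImage.size.height)];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}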

---------------------------------- the obligatory divider ----------------------------------

The standalone UIKit screenshot and the standalone OpenGL ES screenshot should be straightforward from the code above. What remains is the glue I put together myself: the button handler below takes both screenshots and merges them, calling the glToUIImage and mergerImage:secondImage: methods exactly as defined earlier.


- (void)handleButtonAction:(id)sender {
    // 1. Take the UIKit screenshot (the QA1703 approach from above).
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    } else {
        UIGraphicsBeginImageContext(imageSize);
    }

    CGContextRef context = UIGraphicsGetCurrentContext();

    for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen]) {
            CGContextSaveGState(context);
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            CGContextConcatCTM(context, [window transform]);
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            [[window layer] renderInContext:context];
            CGContextRestoreGState(context);
        }
    }

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 2. Take the OpenGL ES screenshot, composite the UIKit shot on top,
    //    and save the result to the photo album.
    UIImage *resultImage = [self mergerImage:[self glToUIImage] secondImage:image];
    UIImageWriteToSavedPhotosAlbum(resultImage, self, nil, nil);

    NSLog(@"UIKit & OpenGL ES Succeeded!");
}


Finally the merged image is saved to the photo album. That's all for now. Back to writing code!
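One last note: the save call above passes nil for the completion target and selector, so a failure (for example, the user denying photo-album access) goes unnoticed. If you want feedback, UIImageWriteToSavedPhotosAlbum accepts a callback with the fixed signature below; the log messages here are just placeholders:

// Call it as:
// UIImageWriteToSavedPhotosAlbum(resultImage, self,
//     @selector(image:didFinishSavingWithError:contextInfo:), NULL);
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
    if (error) {
        NSLog(@"Saving screenshot failed: %@", error);
    } else {
        NSLog(@"Screenshot saved to the photo album.");
    }
}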