How to extract data from a CMSampleBufferRef when capturing audio/video

Topic: How to convert a CMSampleBufferRef into NSData or a UIImage
    // Create a UIImage from sample buffer data
    - (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // Get the base address of the pixel buffer
        void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
        // Get the number of bytes per row for the pixel buffer
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        // Get the pixel buffer width and height
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        // Create a device-dependent RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        // Create a bitmap graphics context with the sample buffer data
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                     bytesPerRow, colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        // Create a Quartz image from the pixel data in the bitmap graphics context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        // Free up the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);

        // Create an image object from the Quartz image
        UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0f orientation:UIImageOrientationRight];

        // Release the Quartz image
        CGImageRelease(quartzImage);

        return image; // (the original post was missing this return statement)
    }

It keeps logging "CGBitmapContextCreateImage: invalid context 0x0" and I can't get an image out. Has anyone run into this?
OP, did you ever solve this? I've hit the same headache and am also looking for a fix.
Re: #1 (sonker)
No... we ended up having the server side handle it instead.
Isn't it a format problem? The buffer being delivered probably isn't in an RGB format.
How to create AudioBuffer/Audio from NSdata
Question
I am a beginner with streaming applications. I create NSData from an AudioBuffer and send the NSData to the client (receiver), but I don't know how to convert the NSData back into an AudioBuffer.
I am using the following code to convert an AudioBuffer to NSData (this part works fine):
    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
    {
        AudioStreamBasicDescription audioFormat;
        memset(&audioFormat, 0, sizeof(audioFormat));
        audioFormat.mSampleRate = 8000.0;
        audioFormat.mFormatID = kAudioFormatiLBC;
        audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | kAudioFormatFlagIsAlignedHigh;
        audioFormat.mFramesPerPacket = 1;
        audioFormat.mChannelsPerFrame = 1;
        audioFormat.mBitsPerChannel = 16;
        audioFormat.mBytesPerPacket = 2;
        audioFormat.mReserved = 0;
        audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame * sizeof(SInt16);

        AudioBufferList audioBufferList;
        NSMutableData *data = [[NSMutableData alloc] init];
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

        for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
            AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
            Float32 *frame = (Float32 *)audioBuffer.mData;
            [data appendBytes:frame length:audioBuffer.mDataByteSize];
        }
    }
If this is not the proper way, please help me. Thanks.
Best Answer
This is the code I used to convert my audio data (an audio file) into a floating-point representation and save it into an array. First I read the audio data into an AudioBufferList, then take the float value of each sample. Check whether the code below helps:
    - (void)PrintFloatDataFromAudioFile {
        NSString *name = @"Filename"; // YOUR FILE NAME
        NSString *source = [[NSBundle mainBundle] pathForResource:name ofType:@"m4a"]; // SPECIFY YOUR FILE FORMAT
        const char *cString = [source cStringUsingEncoding:NSASCIIStringEncoding];
        CFStringRef str = CFStringCreateWithCString(kCFAllocatorDefault, cString, kCFStringEncodingMacRoman);
        CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, str, kCFURLPOSIXPathStyle, false);

        ExtAudioFileRef fileRef;
        ExtAudioFileOpenURL(inputFileURL, &fileRef);

        AudioStreamBasicDescription audioFormat;
        audioFormat.mSampleRate = 44100; // GIVE YOUR SAMPLING RATE
        audioFormat.mFormatID = kAudioFormatLinearPCM;
        audioFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat;
        audioFormat.mBitsPerChannel = sizeof(Float32) * 8;
        audioFormat.mChannelsPerFrame = 1; // Mono
        audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame * sizeof(Float32); // == sizeof(Float32)
        audioFormat.mFramesPerPacket = 1;
        audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mBytesPerFrame; // == sizeof(Float32)

        // Apply the client data format to the Extended Audio File
        ExtAudioFileSetProperty(fileRef,
                                kExtAudioFileProperty_ClientDataFormat,
                                sizeof(AudioStreamBasicDescription),
                                &audioFormat);

        int numSamples = 1024; // How many samples to read in at a time
        UInt32 sizePerPacket = audioFormat.mBytesPerPacket; // == sizeof(Float32)
        UInt32 packetsPerBuffer = numSamples;
        UInt32 outputBufferSize = packetsPerBuffer * sizePerPacket;

        // outputBuffer is the memory where the converted samples land
        UInt8 *outputBuffer = (UInt8 *)malloc(sizeof(UInt8) * outputBufferSize);

        AudioBufferList convertedData;
        convertedData.mNumberBuffers = 1; // Set this to 1 for mono
        convertedData.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame; // also 1
        convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
        convertedData.mBuffers[0].mData = outputBuffer;

        UInt32 frameCount = numSamples;
        float *samplesAsCArray;
        int j = 0;
        double floatDataArray[882000]; // SPECIFY YOUR DATA LIMIT; MINE WAS 882000, SHOULD BE EQUAL TO OR MORE THAN YOUR DATA LIMIT
        while (frameCount > 0) {
            ExtAudioFileRead(fileRef, &frameCount, &convertedData);
            if (frameCount > 0) {
                AudioBuffer audioBuffer = convertedData.mBuffers[0];
                samplesAsCArray = (float *)audioBuffer.mData; // CAST mData TO FLOAT
                for (int i = 0; i < numSamples; i++) {
                    floatDataArray[j] = (double)samplesAsCArray[i]; // PUT THE DATA INTO THE FLOAT ARRAY
                    printf("\n%f", floatDataArray[j]); // PRINT THE ARRAY'S DATA, IN FLOAT FORM RANGING -1 TO +1
                    j++;
                }
            }
        }
    }
This article is translated from Stack Overflow; if your English is good you can refer to the original: iPhone development, reading audio.
Summary (found online, not tested by me): solved, with two approaches.

Method 1: transcode the file into the app's own directory as a .caf file, using the AVAssetReader, AVAssetWriter, AVAssetReaderAudioMixOutput, and AVAssetWriterInput APIs. A clear walkthrough with full code exists online ("From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary"). The drawbacks: it is slow, and every audio file read leaves a .caf file in the app folder (a .caf is roughly ten times the size of the .mp3).

Method 2: read the file contents into memory in chunks, mainly for audio analysis:

    // The argument is the AssetURL obtained from the MPMediaItem
    - (void)loadToMemory:(NSURL *)asset_url
    {
        NSError *reader_error = nil;
        AVURLAsset *item_choosed_asset = [AVURLAsset URLAssetWithURL:asset_url options:nil];
        AVAssetReader *item_reader = [AVAssetReader assetReaderWithAsset:item_choosed_asset error:&reader_error];
        if (reader_error) {
            NSLog(@"failed to create asset reader, reason: %@", [reader_error description]);
        }
        NSArray *asset_tracks = [item_choosed_asset tracks];
        AVAssetReaderAudioMixOutput *item_reader_output = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:asset_tracks audioSettings:nil];
        if ([item_reader canAddOutput:item_reader_output]) {
            [item_reader addOutput:item_reader_output];
        } else {
            NSLog(@"the reader can not add the output");
        }
        UInt64 total_converted_bytes = 0;
        CMItemCount converted_sample_num = 0;
        size_t sample_size = 0;
        short *data_buffer = NULL;
        CMBlockBufferRef next_buffer_data = NULL;
        [item_reader startReading];
        while (item_reader.status == AVAssetReaderStatusReading) {
            CMSampleBufferRef next_buffer = [item_reader_output copyNextSampleBuffer];
            if (next_buffer) {
                total_converted_bytes = CMSampleBufferGetTotalSampleSize(next_buffer); // total bytes in next_buffer
                sample_size = CMSampleBufferGetSampleSize(next_buffer, 0);             // size of sample 0 in next_buffer
                converted_sample_num = CMSampleBufferGetNumSamples(next_buffer);       // number of samples in next_buffer
                NSLog(@"the number of samples is %f", (float)converted_sample_num);
                NSLog(@"the size of the sample is %f", (float)sample_size);
                NSLog(@"the size of the whole buffer is %f", (float)total_converted_bytes);
                // Copy the data into data_buffer.
                // With this approach we parse each sample buffer as soon as we get it,
                // instead of loading the whole file into memory first.
                // copyNextSampleBuffer reads 8196 samples per call (except the last),
                // stored in memory as shorts (two bytes per unit). Each sample's size
                // depends on the channel count (CMSampleBufferGetSampleSize), so each
                // call yields 8196 * sample_size bytes; the space needed per call is
                // therefore a fixed (8196 * sample_size) / 2 shorts.
                if (!data_buffer) {
                    data_buffer = new short[4096 * sample_size];
                }
                next_buffer_data = CMSampleBufferGetDataBuffer(next_buffer);
                OSStatus buffer_status = CMBlockBufferCopyDataBytes(next_buffer_data, 0, total_converted_bytes, data_buffer);
                if (buffer_status != kCMBlockBufferNoErr) {
                    NSLog(@"something wrong happened when copying data bytes");
                }
                /* At this point the raw, uncompressed audio data sits in
                   data_buffer, ready for analysis or other processing. */
            }
            CFRelease(next_buffer);
        }
        if (item_reader.status == AVAssetReaderStatusCompleted) {
            NSLog(@"read over......");
        }
    }

The second approach (personally tested in my app, works): pick the item, export it into the sandbox with AVAssetExportSession, then read NSData back from the resulting path.

    MPMediaPickerController *pickerController = [[MPMediaPickerController alloc] initWithMediaTypes:MPMediaTypeMusic];
    //pickerController.prompt = @"Choose song";
    pickerController.allowsPickingMultipleItems = NO;
    pickerController.delegate = self;
    [self presentModalViewController:pickerController animated:YES];
    [pickerController release];

    // Write the picked item into the sandbox, then get NSData from the path.
    - (void)mediaPicker:(MPMediaPickerController *)mediaPicker didPickMediaItems:(MPMediaItemCollection *)mediaItemCollection
    {
        NSArray *media_array = [mediaItemCollection items];
        MPMediaItem *song_item = [media_array objectAtIndex:0];
        SongObject *song_object = [[SongObject alloc] init];
        [song_object setSong_name:[song_item valueForProperty:MPMediaItemPropertyTitle]];
        [song_object setSinger_name:[song_item valueForKey:MPMediaItemPropertyPodcastTitle]];
        NSURL *url = [song_item valueForProperty:MPMediaItemPropertyAssetURL];
        NSLog(@"url is %@", url);

        AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:url options:nil];
        AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:songAsset presetName:AVAssetExportPresetAppleM4A];
        exporter.outputFileType = @"com.apple.m4a-audio";
        NSString *exportFile = [myDocumentsDirectory() stringByAppendingPathComponent:@"exported.m4a"];
        if ([[NSFileManager defaultManager] fileExistsAtPath:exportFile]) {
            NSError *deleteErr = nil;
            [[NSFileManager defaultManager] removeItemAtPath:exportFile error:&deleteErr];
            if (deleteErr) {
                NSLog(@"Can't delete %@: %@", exportFile, deleteErr);
            }
        }
        NSURL *path_url = [NSURL fileURLWithPath:exportFile];
        exporter.outputURL = path_url;
        [exporter exportAsynchronouslyWithCompletionHandler:^{
            int exportStatus = exporter.status;
            switch (exportStatus) {
                case AVAssetExportSessionStatusFailed: {
                    // log the error
                    NSError *exportError = exporter.error;
                    NSLog(@"AVAssetExportSessionStatusFailed: %@", exportError);
                    break;
                }
                case AVAssetExportSessionStatusCompleted: {
                    NSLog(@"AVAssetExportSessionStatusCompleted");
                    NSData *data = [NSData dataWithContentsOfURL:path_url];
                    NSLog(@"data is %@", data);
                    break;
                }
                case AVAssetExportSessionStatusUnknown: {
                    NSLog(@"AVAssetExportSessionStatusUnknown");
                    break;
                }
                case AVAssetExportSessionStatusExporting: {
                    NSLog(@"AVAssetExportSessionStatusExporting");
                    break;
                }
                case AVAssetExportSessionStatusCancelled: {
                    NSLog(@"AVAssetExportSessionStatusCancelled");
                    break;
                }
                default: {
                    NSLog(@"didn't get export status");
                    break;
                }
            }
        }];
        [song_object release];
        [mediaPicker dismissModalViewControllerAnimated:YES];
    }

From 云怀空-abel
How to extract data from a PivotTable with formulas (question updated, urgent help needed)
Many thanks for the moderator's help; this is my first time using formulas to pull data out of a PivotTable. A few more questions, if anyone could take a look:
1. When the PivotTable has no data, the cell shows "#REF". How do I write the formula so it shows blank instead? I tried ISNONTEXT without success.
2. How do I write a formula so that when a value appears two or more times in the "透视结果" (pivot results) column it is only counted once, as shown in 30:30 (outside the PivotTable area)? And how do I reference that result in rows D7 and D8 of the sheet "丝印B VI Daily Report"?
3. How do I pivot the POSITION NO and DEFECTS from the PivotTable "DEFECT" into the matching time slots, as shown in D12 and J12?
4. In the PivotTable "DEFECT", how do I display the DEFECT quantity instead of the count, i.e. show the value of C2 in D9 rather than a count of 1?
This is needed right away. Thanks, everyone!
Please help, everyone! Thanks!
Quoting czzqb's reply at 12:48:50:
For the first question (showing blank instead of "#REF" when the PivotTable has no data): ISNOTEXT checks whether a value is text and does nothing for error values. Use ISERROR().
Option 1: select the two rows (D7:Z8) with the cursor on D7, add the conditional format =ISERROR(D7), and set the font color to match the background.
Option 2: define a name
Z=GETPIVOTDATA("Results",透视结果!$A$4,"Results",'丝印B VI Daily Report'!$A7,"Input Time",'丝印B VI Daily Report'!D$1)
then set D7=IF(ISERROR(z),"",z) and fill right and down.
I didn't understand the third question. Where is the "matching time slot"? Why would iu03 end up under 7? The other two I can't do.

Many thanks to the moderator for the quick reply. For question 3, an example:
1. In the PivotTable "DEFECT", D01 has three POSITIONs (se43, iu03, oi89) with DEFECTS counts (1, 1, 2) appearing under INPUT TIME (7, 7, 1) respectively.
2. I need to pivot each POSITION together with its DEFECTS into the cell under the matching "TIME" column in D01's row on sheet "丝印B VI Daily Report", with the result as shown in D12 and J12.
Please keep helping, everyone. Thanks!
Quoting czzqb's reply at 20:40:29: "Would this layout work instead?" Many thanks for the help; I've decided to follow your suggestion and redesign the sheet layout. Thanks again, I've learned a great deal.
Moderator, I set up the formulas following your method. They look identical, but for some reason they don't work, and I can't tell where I went wrong. Could you check it for me? Thanks again!
Formula breaks when filled down
Quoting czzqb's reply at 12:20:19: Where did the per-item TOTAL rows in the DEFECT PivotTable go? Your original table had them, and I use them as the end-of-item marker; without "D01 TOTAL" I cannot find D01's last row. Also, B7 in the definition of name ZZ is an absolute reference; you changed it to relative, so change it back. It can be done without the per-item TOTALs, but the formulas would have to change.
Thanks, moderator; I've changed the formula back. It now works for the current cell, but not when filled down: the results show incompletely or not at all. Do I need to rewrite the formula? Thanks!
Comparing the two methods
Quoting czzqb's reply at 12:41:12: With the cursor on D62, redefine two names:
ZZ=OFFSET(DEFECT!$B$7,MATCH('丝印B VI Daily Report'!$B62,DEFECT!$A:$A,)-7,,MATCH("*",INDEX(DEFECT!$A:$A,H+1):DEFECT!$A$9999,))
ZZZ=GETPIVOTDATA("Defects",DEFECT!$A$5,"Defect Code",'丝印B VI Daily Report'!$B62,"Position No",ZZ,"Input Time",'丝印B VI Daily Report'!D$51)
and add one more name:
H=MATCH('丝印B VI Daily Report'!$B62,DEFECT!$A:$A,)
The 9999 assumes your PivotTable never goes past row 9999; adjust it to your actual size (don't make it too large). With these names the PivotTable no longer needs the per-item TOTAL rows (as in your latest attachment).
I'm really sorry to keep troubling you. I tried both methods: with the first, filling down shows incomplete results; with the second, nothing displays at all. I don't know whether it's a mistake in my operation. Moderator, please take another look, and thank you sincerely for spending so much time correcting my work.
Duplicate values appearing
Learned something again. This time I checked everything thoroughly and set formulas in all the cells; the position numbers all land in the right cells, but some of them show up more than once and I can't tell why. Moderator, please take a look when you have time. With this patient guidance my Excel skills improve every day; I'm truly grateful and touched.
Quoting czzqb's reply at 12:59:44: Sorry, I overlooked that case earlier. Change it as follows:
1. Rename the old name ZZZ to ZZ0, keeping its contents.
2. Redefine ZZZ as: =IF(ROW($1:$5)>ROWS(ZZ0),,ZZ0)
Done at last! Thank you, moderator, for walking me through this difficult process; I now have a whole new understanding of PivotTable usage, array formulas, and masking error values. Thank you very, very much!