This article explains how to load a huge image in full quality on Android without compressing it. The method described is simple, quick to apply, and practical, so if you are interested, read on.
Loading images is familiar territory; to avoid OOM as far as possible, the usual practice is:
For display: down-sample the image to match the size of the view that will show it. When there are many images: use a caching mechanism such as LruCache to keep the total memory occupied by all images within a bounded range.
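For reference, the conventional per-view down-sampling looks roughly like the sketch below. It is not part of this article's demo code; decodeSampledBitmap and the reqWidth/reqHeight parameters are hypothetical names standing in for the target ImageView's size.

// assumes android.content.res.Resources, android.graphics.Bitmap and
// android.graphics.BitmapFactory are imported
public static Bitmap decodeSampledBitmap(Resources res, int resId, int reqWidth, int reqHeight) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;              // read only the dimensions
    BitmapFactory.decodeResource(res, resId, options);

    int inSampleSize = 1;
    // halve the image until it is no larger than roughly the target view
    while (options.outWidth / (inSampleSize * 2) >= reqWidth
            && options.outHeight / (inSampleSize * 2) >= reqHeight) {
        inSampleSize *= 2;
    }

    options.inSampleSize = inSampleSize;
    options.inJustDecodeBounds = false;             // now decode the pixels for real
    return BitmapFactory.decodeResource(res, resId, options);
}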
There is, however, another scenario: a single image that is extremely large and must not be compressed, for example a world map, the "Along the River During the Qingming Festival" scroll, or a long Weibo image.
So how do we handle this requirement?
First, since we load the image at its original size without compression, the screen is certainly not big enough, and given memory constraints we cannot load the whole image into memory at once. So it has to be loaded region by region, which calls for one class:
BitmapRegionDecoder
Second, since the screen cannot show the whole image, we at least need a drag gesture (up, down, left, right) so the user can pan around.
To sum up, the goal of this post is to build a custom View that displays a huge image and lets the user drag to view it; roughly, the effect looks like this:
Well, the Qingming scroll is really long; to see the full image, download the sample at the end of the post (the image lives in the assets directory). Of course, if your image is also very tall, you can drag vertically as well.
BitmapRegionDecoder
It is mainly used to display a rectangular region of an image; if you need to show a specific area of a picture, this class is exactly what you want.
Its usage is very simple: since it displays a region of an image, at minimum it needs one method to set the image and one method to pass in the region to display. In detail:
BitmapRegionDecoder provides a series of newInstance methods to construct an instance, accepting a file path, a file descriptor, an InputStream of the file, and so on.
For example:
BitmapRegionDecoder bitmapRegionDecoder = BitmapRegionDecoder.newInstance(inputStream, false);
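The other factory overloads work the same way; for example, assuming the image sits on disk (the path below is made up for illustration):

BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance("/sdcard/big_image.jpg", false);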
That takes care of handing over the image we want to process; next comes displaying the specified region.
bitmapRegionDecoder.decodeRegion(rect, options);
The first parameter is obviously a Rect; the second is a BitmapFactory.Options, through which you can control the image's inSampleSize, inPreferredConfig, and so on.
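Note that inSampleSize also applies to region decoding, so you can fetch a scaled-down version of a sub-rectangle if you ever need an overview. A small sketch, with an arbitrary sample size and rectangle:

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = 2;                          // decode the region at half resolution
opts.inPreferredConfig = Bitmap.Config.RGB_565; // 2 bytes per pixel instead of 4
Bitmap region = bitmapRegionDecoder.decodeRegion(new Rect(0, 0, 400, 400), opts);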
Let's look at a super simple example:
package com.zhy.blogcodes.largeImage;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.widget.ImageView;

import com.zhy.blogcodes.R;

import java.io.IOException;
import java.io.InputStream;

public class LargeImageViewActivity extends AppCompatActivity {

    private ImageView mImageView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_large_image_view);
        mImageView = (ImageView) findViewById(R.id.id_imageview);

        try {
            InputStream inputStream = getAssets().open("tangyan.jpg");
            // get the image's width and height
            BitmapFactory.Options tmpOptions = new BitmapFactory.Options();
            tmpOptions.inJustDecodeBounds = true;
            BitmapFactory.decodeStream(inputStream, null, tmpOptions);
            int width = tmpOptions.outWidth;
            int height = tmpOptions.outHeight;

            // display the central region of the image
            BitmapRegionDecoder bitmapRegionDecoder = BitmapRegionDecoder.newInstance(inputStream, false);
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inPreferredConfig = Bitmap.Config.RGB_565;
            Bitmap bitmap = bitmapRegionDecoder.decodeRegion(
                    new Rect(width / 2 - 100, height / 2 - 100, width / 2 + 100, height / 2 + 100),
                    options);
            mImageView.setImageBitmap(bitmap);

        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The code above uses BitmapRegionDecoder to load an image from assets, calls bitmapRegionDecoder.decodeRegion to decode the central rectangular region of the image, gets back a Bitmap, and finally shows it in the ImageView.
The result:
The small image shows exactly the central region of the large image below it.
OK, now that we understand the basic usage of BitmapRegionDecoder, extending it into a custom control that displays a huge image is straightforward: the Rect covers exactly the size of our View, and as the user drags we simply keep updating the Rect's coordinates.
Based on the analysis above, the plan for this custom control is quite clear:
Provide an entry point for setting the image
Override onTouchEvent and, based on the user's move gesture, update the parameters of the display region
After every update of the region parameters, call invalidate; in onDraw, use regionDecoder.decodeRegion to obtain the bitmap and draw it
With that sorted out it turns out to be easy; here is the code:
package com.zhy.blogcodes.largeImage.view;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Canvas;
import android.graphics.Rect;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.View;

import java.io.IOException;
import java.io.InputStream;

/**
 * Created by zhy on 15/5/16.
 */
public class LargeImageView extends View {
    private BitmapRegionDecoder mDecoder;
    /**
     * width and height of the image
     */
    private int mImageWidth, mImageHeight;
    /**
     * the region to draw
     */
    private volatile Rect mRect = new Rect();

    private MoveGestureDetector mDetector;

    private static final BitmapFactory.Options options = new BitmapFactory.Options();

    static {
        options.inPreferredConfig = Bitmap.Config.RGB_565;
    }

    public void setInputStream(InputStream is) {
        try {
            mDecoder = BitmapRegionDecoder.newInstance(is, false);
            BitmapFactory.Options tmpOptions = new BitmapFactory.Options();
            // Grab the bounds for the scene dimensions
            tmpOptions.inJustDecodeBounds = true;
            BitmapFactory.decodeStream(is, null, tmpOptions);
            mImageWidth = tmpOptions.outWidth;
            mImageHeight = tmpOptions.outHeight;

            requestLayout();
            invalidate();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (is != null) is.close();
            } catch (Exception e) {
            }
        }
    }

    public void init() {
        mDetector = new MoveGestureDetector(getContext(), new MoveGestureDetector.SimpleMoveGestureDetector() {
            @Override
            public boolean onMove(MoveGestureDetector detector) {
                int moveX = (int) detector.getMoveX();
                int moveY = (int) detector.getMoveY();

                if (mImageWidth > getWidth()) {
                    mRect.offset(-moveX, 0);
                    checkWidth();
                    invalidate();
                }
                if (mImageHeight > getHeight()) {
                    mRect.offset(0, -moveY);
                    checkHeight();
                    invalidate();
                }

                return true;
            }
        });
    }

    private void checkWidth() {
        Rect rect = mRect;
        int imageWidth = mImageWidth;
        int imageHeight = mImageHeight;

        if (rect.right > imageWidth) {
            rect.right = imageWidth;
            rect.left = imageWidth - getWidth();
        }
        if (rect.left < 0) {
            rect.left = 0;
            rect.right = getWidth();
        }
    }

    private void checkHeight() {
        Rect rect = mRect;
        int imageWidth = mImageWidth;
        int imageHeight = mImageHeight;

        if (rect.bottom > imageHeight) {
            rect.bottom = imageHeight;
            rect.top = imageHeight - getHeight();
        }
        if (rect.top < 0) {
            rect.top = 0;
            rect.bottom = getHeight();
        }
    }

    public LargeImageView(Context context, AttributeSet attrs) {
        super(context, attrs);
        init();
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        mDetector.onToucEvent(event);
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        Bitmap bm = mDecoder.decodeRegion(mRect, options);
        canvas.drawBitmap(bm, 0, 0, null);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);

        int width = getMeasuredWidth();
        int height = getMeasuredHeight();

        int imageWidth = mImageWidth;
        int imageHeight = mImageHeight;

        // show the central region of the image by default; adjust as you like
        mRect.left = imageWidth / 2 - width / 2;
        mRect.top = imageHeight / 2 - height / 2;
        mRect.right = mRect.left + width;
        mRect.bottom = mRect.top + height;
    }
}
A few notes on the source above:
setInputStream obtains the image's real width and height and initializes our mDecoder
onMeasure assigns the display region's rect, sized to the view's dimensions
onTouchEvent listens for the move gesture; in its callback we update the rect's parameters, do boundary checks, and finally call invalidate
onDraw simply obtains the bitmap for the current rect and draws it
OK, nothing complicated so far. But did you notice that the code listening for the user's move gesture looks a bit unusual? Yes, it mimics the system's ScaleGestureDetector: I wrote a MoveGestureDetector, whose code follows:
MoveGestureDetector
package com.zhy.blogcodes.largeImage.view;

import android.content.Context;
import android.graphics.PointF;
import android.view.MotionEvent;

public class MoveGestureDetector extends BaseGestureDetector {

    private PointF mCurrentPointer;
    private PointF mPrePointer;
    // kept only to avoid allocating a new object each time
    private PointF mDeltaPointer = new PointF();
    // records the final result, returned to the caller
    private PointF mExtenalPointer = new PointF();

    private OnMoveGestureListener mListenter;

    public MoveGestureDetector(Context context, OnMoveGestureListener listener) {
        super(context);
        mListenter = listener;
    }

    @Override
    protected void handleInProgressEvent(MotionEvent event) {
        int actionCode = event.getAction() & MotionEvent.ACTION_MASK;
        switch (actionCode) {
            case MotionEvent.ACTION_CANCEL:
            case MotionEvent.ACTION_UP:
                mListenter.onMoveEnd(this);
                resetState();
                break;
            case MotionEvent.ACTION_MOVE:
                updateStateByEvent(event);
                boolean update = mListenter.onMove(this);
                if (update) {
                    mPreMotionEvent.recycle();
                    mPreMotionEvent = MotionEvent.obtain(event);
                }
                break;
        }
    }

    @Override
    protected void handleStartProgressEvent(MotionEvent event) {
        int actionCode = event.getAction() & MotionEvent.ACTION_MASK;
        switch (actionCode) {
            case MotionEvent.ACTION_DOWN:
                resetState(); // in case CANCEL or UP was never received, just to be safe
                mPreMotionEvent = MotionEvent.obtain(event);
                updateStateByEvent(event);
                break;
            case MotionEvent.ACTION_MOVE:
                mGestureInProgress = mListenter.onMoveBegin(this);
                break;
        }
    }

    protected void updateStateByEvent(MotionEvent event) {
        final MotionEvent prev = mPreMotionEvent;

        mPrePointer = caculateFocalPointer(prev);
        mCurrentPointer = caculateFocalPointer(event);

        //Log.e("TAG", mPrePointer.toString() + " , " + mCurrentPointer);

        boolean mSkipThisMoveEvent = prev.getPointerCount() != event.getPointerCount();

        //Log.e("TAG", "mSkipThisMoveEvent = " + mSkipThisMoveEvent);

        mExtenalPointer.x = mSkipThisMoveEvent ? 0 : mCurrentPointer.x - mPrePointer.x;
        mExtenalPointer.y = mSkipThisMoveEvent ? 0 : mCurrentPointer.y - mPrePointer.y;
    }

    /**
     * Compute the focal (center) point of all pointers in the event
     *
     * @param event
     * @return
     */
    private PointF caculateFocalPointer(MotionEvent event) {
        final int count = event.getPointerCount();
        float x = 0, y = 0;
        for (int i = 0; i < count; i++) {
            x += event.getX(i);
            y += event.getY(i);
        }

        x /= count;
        y /= count;

        return new PointF(x, y);
    }

    public float getMoveX() {
        return mExtenalPointer.x;
    }

    public float getMoveY() {
        return mExtenalPointer.y;
    }

    public interface OnMoveGestureListener {
        public boolean onMoveBegin(MoveGestureDetector detector);

        public boolean onMove(MoveGestureDetector detector);

        public void onMoveEnd(MoveGestureDetector detector);
    }

    public static class SimpleMoveGestureDetector implements OnMoveGestureListener {

        @Override
        public boolean onMoveBegin(MoveGestureDetector detector) {
            return true;
        }

        @Override
        public boolean onMove(MoveGestureDetector detector) {
            return false;
        }

        @Override
        public void onMoveEnd(MoveGestureDetector detector) {
        }
    }
}
BaseGestureDetector
package com.zhy.blogcodes.largeImage.view;

import android.content.Context;
import android.view.MotionEvent;

public abstract class BaseGestureDetector {

    protected boolean mGestureInProgress;

    protected MotionEvent mPreMotionEvent;
    protected MotionEvent mCurrentMotionEvent;

    protected Context mContext;

    public BaseGestureDetector(Context context) {
        mContext = context;
    }

    public boolean onToucEvent(MotionEvent event) {
        if (!mGestureInProgress) {
            handleStartProgressEvent(event);
        } else {
            handleInProgressEvent(event);
        }
        return true;
    }

    protected abstract void handleInProgressEvent(MotionEvent event);

    protected abstract void handleStartProgressEvent(MotionEvent event);

    protected abstract void updateStateByEvent(MotionEvent event);

    protected void resetState() {
        if (mPreMotionEvent != null) {
            mPreMotionEvent.recycle();
            mPreMotionEvent = null;
        }
        if (mCurrentMotionEvent != null) {
            mCurrentMotionEvent.recycle();
            mCurrentMotionEvent = null;
        }
        mGestureInProgress = false;
    }
}
You might say this is an awful lot of code for a simple move gesture, and that's true: detecting a move is trivial. The reason it's written this way is reusability. Imagine a whole family of XXXGestureDetector classes; whenever we need to listen for some gesture, we just grab the matching detector, which is very convenient. I'm sure many of you have also been frustrated that Google ships a ScaleGestureDetector but no RotateGestureDetector.
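To illustrate how the pattern extends, here is a hypothetical RotateGestureDetector built on the same BaseGestureDetector shown above. It is purely a sketch of mine, not part of the original demo: it reports the change in angle (in degrees) between the first two pointers across consecutive MOVE events.

// same package as BaseGestureDetector; needs android.content.Context and android.view.MotionEvent
public class RotateGestureDetector extends BaseGestureDetector {

    public interface OnRotateGestureListener {
        boolean onRotateBegin(RotateGestureDetector detector);
        boolean onRotate(RotateGestureDetector detector);
        void onRotateEnd(RotateGestureDetector detector);
    }

    private float mDeltaDegrees;
    private final OnRotateGestureListener mListener;

    public RotateGestureDetector(Context context, OnRotateGestureListener listener) {
        super(context);
        mListener = listener;
    }

    public float getRotateDegrees() {
        return mDeltaDegrees;
    }

    @Override
    protected void handleStartProgressEvent(MotionEvent event) {
        switch (event.getAction() & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_POINTER_DOWN:
                // a rotation needs at least two pointers; remember this event as the baseline
                resetState();
                mPreMotionEvent = MotionEvent.obtain(event);
                break;
            case MotionEvent.ACTION_MOVE:
                if (mPreMotionEvent != null && event.getPointerCount() >= 2) {
                    mGestureInProgress = mListener.onRotateBegin(this);
                }
                break;
        }
    }

    @Override
    protected void handleInProgressEvent(MotionEvent event) {
        switch (event.getAction() & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
            case MotionEvent.ACTION_POINTER_UP:
                mListener.onRotateEnd(this);
                resetState();
                break;
            case MotionEvent.ACTION_MOVE:
                updateStateByEvent(event);
                if (mListener.onRotate(this)) {
                    mPreMotionEvent.recycle();
                    mPreMotionEvent = MotionEvent.obtain(event);
                }
                break;
        }
    }

    @Override
    protected void updateStateByEvent(MotionEvent event) {
        if (mPreMotionEvent.getPointerCount() < 2 || event.getPointerCount() < 2) {
            mDeltaDegrees = 0;
            return;
        }
        mDeltaDegrees = angleOfFirstTwoPointers(event) - angleOfFirstTwoPointers(mPreMotionEvent);
    }

    // angle (in degrees) of the line through the first two pointers
    private static float angleOfFirstTwoPointers(MotionEvent e) {
        float dx = e.getX(1) - e.getX(0);
        float dy = e.getY(1) - e.getY(0);
        return (float) Math.toDegrees(Math.atan2(dy, dx));
    }
}

Using it would look just like MoveGestureDetector: forward events via detector.onToucEvent(event) from onTouchEvent and read getRotateDegrees() inside onRotate.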
With that in mind, you should see why it's structured this way; of course it's not mandatory — everyone has their own style.
There's not much to say about testing: just drop our LargeImageView into a layout file and set the InputStream from the Activity.
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <com.zhy.blogcodes.largeImage.view.LargeImageView
        android:id="@+id/id_largetImageview"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

</RelativeLayout>
Then set the image in the Activity:
package com.zhy.blogcodes.largeImage;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;

import com.zhy.blogcodes.R;
import com.zhy.blogcodes.largeImage.view.LargeImageView;

import java.io.IOException;
import java.io.InputStream;

public class LargeImageViewActivity extends AppCompatActivity {

    private LargeImageView mLargeImageView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_large_image_view);
        mLargeImageView = (LargeImageView) findViewById(R.id.id_largetImageview);

        try {
            InputStream inputStream = getAssets().open("world.jpg");
            mLargeImageView.setInputStream(inputStream);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The result:
That's it. You should now have a deeper understanding of how to load a huge image at full quality on Android without compressing it — go ahead and try it out yourself!