Modification Of Canny Edge Detection For Estimating Coral Reef Component Distribution From Underwater Video Transects

In recent years, monitoring of coral reef status and health has been done with the assistance of image processing techniques. Since underwater images always suffer from major drawbacks, research in this area is still active. In this paper, we propose to use edge-based segmentation, where we modify the original Canny edge detector and then use blob processing to extract dominant features from the images. We conduct experiments on images extracted from video transects, and the results are promising for estimating coral reef distribution.
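
A minimal OpenCV sketch of the kind of pipeline described above: standard Canny followed by connected-component blob analysis. The paper's specific modification to Canny is not reproduced here, and the file name and thresholds are placeholders.

```python
import cv2
import numpy as np

frame = cv2.imread("transect_frame.png")             # frame grabbed from a video transect
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)             # suppress underwater noise first

edges = cv2.Canny(gray, 50, 150)                     # standard Canny; the paper modifies this step
edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                         np.ones((3, 3), np.uint8))  # close small gaps so blobs form

# Blob processing: keep only large connected components as candidate coral regions.
n, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
mask = np.zeros_like(edges)
for i in range(1, n):                                # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 200:             # area threshold is an assumption
        mask[labels == i] = 255

coverage = mask.mean() / 255.0                       # crude estimate of coral coverage
print(f"estimated coral coverage: {coverage:.1%}")
```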

Noniterative Interpolation-Based Super-Resolution Minimizing Aliasing In The Reconstructed Image

Super-resolution (SR) techniques produce a high-resolution image from a set of low-resolution undersampled images. In this paper, we propose a new method for super-resolution that uses sampling theory concepts to derive a noniterative SR algorithm. We first raise the issue of the validity of the data model usually assumed in SR, pointing out that it imposes a band-limited reconstructed image plus a certain type of noise. We propose a sampling theory framework with a prefiltering step that allows us to work with more general data models, as well as a specific new method for SR that uses Delaunay triangulation and B-splines to build the super-resolved image. The proposed method is noniterative and well posed. We prove its effectiveness against traditional iterative and noniterative SR methods on synthetic and real data. Additionally, we prove that we can first solve the interpolation problem and then perform the deblurring, not only when the motion is translational but also when there are rotations and shifts and the imaging system point spread function (PSF) is rotationally symmetric.
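
For orientation, a simplified sketch of the interpolation stage only: registered low-resolution samples are scattered into high-resolution coordinates and interpolated over a Delaunay triangulation (SciPy's `linear` griddata triangulates internally). The paper's B-spline fitting and prefiltering steps are omitted, and the known per-frame shifts are an assumption.

```python
import numpy as np
from scipy.interpolate import griddata

def fuse(lr_frames, shifts, scale=2):
    """lr_frames: list of HxW arrays; shifts: per-frame (dy, dx) in LR pixels."""
    pts, vals = [], []
    for img, (dy, dx) in zip(lr_frames, shifts):
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        # place each LR sample at its registered position on the HR grid
        pts.append(np.column_stack([(ys + dy).ravel(), (xs + dx).ravel()]) * scale)
        vals.append(img.ravel())
    pts, vals = np.vstack(pts), np.concatenate(vals)
    H, W = lr_frames[0].shape[0] * scale, lr_frames[0].shape[1] * scale
    gy, gx = np.mgrid[0:H, 0:W]
    # 'linear' interpolates over a Delaunay triangulation of the sample points;
    # pixels outside the convex hull of the samples come back as NaN.
    return griddata(pts, vals, (gy, gx), method="linear")
```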

Robust And Effective Component-Based Banknote Recognition For The Blind

We develop a novel camera-based computer vision technology to automatically recognize banknotes in order to assist visually impaired people. Our banknote recognition system is robust and effective, with the following features: 1) high accuracy: a high true recognition rate and a low false recognition rate; 2) robustness: it handles a variety of currency designs and bills in various conditions; 3) high efficiency: it recognizes banknotes quickly; and 4) ease of use: it helps blind users aim the target for image capture. To make the system robust to a variety of conditions, including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework using speeded-up robust features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect whether there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves a 100% true recognition rate and a 0% false recognition rate. Our banknote recognition system has also been tested by blind users.
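
A hedged sketch of component-based matching with a spatial-consistency check. The paper uses SURF (available in opencv-contrib as cv2.xfeatures2d.SURF_create); ORB is used below purely as a freely available stand-in, and the file names and thresholds are placeholders.

```python
import cv2
import numpy as np

template = cv2.imread("bill_template.png", cv2.IMREAD_GRAYSCALE)  # reference banknote component
scene = cv2.imread("camera_view.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# Ratio-test matching, then a spatial-consistency check via RANSAC homography;
# the paper similarly uses the spatial relationship of matched features to
# decide whether a bill is actually present, which suppresses false positives.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des_t, des_s, k=2)
good = [m for m, n in (p for p in matches if len(p) == 2)
        if m.distance < 0.75 * n.distance]

detected = False
if len(good) >= 10:                                  # minimum-match count is an assumption
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    detected = inliers is not None and inliers.sum() >= 10
print("banknote detected" if detected else "no banknote")
```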

Crop Type Classification By Simultaneous Use Of Satellite Images Of Different Resolutions

Accurate and timely identification of crop types has significant economic, agricultural, policy, and environmental applications. Existing remote sensing methods for identifying crop types rely on remotely sensed images of high temporal frequency in order to exploit phenological changes in crop reflectance characteristics. However, these image sets generally have relatively low spatial resolution. This tradeoff makes it difficult to classify remotely sensed images in fragmented landscapes where field sizes are smaller than the resolution of the imaging sensor. Here, we develop a method for combining high-spatial-resolution (high-resolution) data with images of low spatial resolution but high temporal frequency to achieve a superior classification of crop types. The solution is implemented and tested on both synthetic and real data sets as a proof of concept. We show that, by incorporating high-temporal-frequency but low-spatial-resolution data into the classification process, up to 20% improvement in classification accuracy can be achieved even if very few high-resolution images are available for a location. This boost in accuracy is roughly equivalent to adding an additional high-resolution image to the temporal stack during classification. The limitations of the current algorithm include computational performance and the need for ideal crop curves. Nevertheless, the resulting boost in accuracy can help researchers create superior crop type classification maps, thereby creating the opportunity to make more informed decisions.
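
One plausible baseline for this kind of fusion, not the paper's algorithm (which fits ideal crop curves): concatenate the few high-resolution observations with the upsampled low-resolution time series per pixel and train a standard classifier. The array names below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_fused_classifier(X_hi, X_lo_up, y):
    """X_hi: (n_pixels, n_hi_dates) high-res reflectance;
    X_lo_up: (n_pixels, n_lo_dates) low-res time series resampled to the
    high-res pixel grid; y: (n_pixels,) crop-type labels."""
    X = np.hstack([X_hi, X_lo_up])   # per-pixel fusion of both resolutions
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```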

Detection And Inpainting Of Facial Wrinkles Using Texture Orientation Fields And Markov Random Field Modeling

Facial retouching is widely used in the media and entertainment industry. Professional software usually requires a minimum level of user expertise to achieve desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfections. We believe that any such algorithm would be amenable to facial retouching applications. Detecting wrinkles/imperfections allows these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with a texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents the distributions of Gabor features of normal skin versus skin imperfections. A Markov random field model is then used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint the irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results on images downloaded from the Internet to show the efficacy of our algorithms.
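
A rough sketch of the detection front end only: Gabor filter magnitudes as per-pixel features and a two-component GMM separating skin from imperfections. The texture orientation field, the MRF coupling, and the EM refinement are omitted; the filter parameters and file name are assumptions, and a small image is assumed so the GMM fit stays cheap.

```python
import numpy as np
from skimage import io, color
from skimage.filters import gabor
from sklearn.mixture import GaussianMixture

img = color.rgb2gray(io.imread("face.png"))          # placeholder input

# Bank of Gabor magnitudes at several orientations (parameters are guesses).
feats = []
for theta in np.linspace(0, np.pi, 4, endpoint=False):
    real, imag = gabor(img, frequency=0.2, theta=theta)
    feats.append(np.hypot(real, imag))
X = np.stack(feats, axis=-1).reshape(-1, len(feats))

# Bimodal GMM: one mode for normal skin, one for wrinkles/imperfections.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X).reshape(img.shape)

# Assume the component with higher mean Gabor energy is the wrinkle class.
wrinkle_comp = np.argmax(gmm.means_.sum(axis=1))
wrinkle_mask = labels == wrinkle_comp
```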

Hierarchical Super-Resolution Image Inpainting

This paper introduces a novel framework for example-based inpainting. It consists of first performing the inpainting on a coarse version of the input image. A hierarchical super-resolution algorithm is then used to recover details in the missing areas. The advantage of this approach is that it is easier to inpaint low-resolution pictures than high-resolution ones. The gain is both in computational complexity and in visual quality. To be less sensitive to the parameter settings of the inpainting method, the low-resolution input picture is inpainted several times with different configurations. The results are efficiently combined with loopy belief propagation, and details are recovered by a single-image super-resolution algorithm. Experimental results in the context of image editing and texture synthesis demonstrate the effectiveness of the proposed method. Results are compared to five state-of-the-art inpainting methods.
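
A minimal sketch of the coarse-then-refine idea: inpaint a downscaled copy, then upscale. The paper combines several inpainting runs with loopy belief propagation and uses a learned single-image SR step; here, plain TELEA inpainting and bicubic resizing stand in for both, and the file names are placeholders.

```python
import cv2

img = cv2.imread("damaged.png")                      # placeholder input
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # nonzero where pixels are missing

h, w = img.shape[:2]
small = cv2.resize(img, (w // 4, h // 4), interpolation=cv2.INTER_AREA)
small_mask = cv2.resize(mask, (w // 4, h // 4), interpolation=cv2.INTER_NEAREST)

# Inpainting is easier and cheaper at low resolution.
filled = cv2.inpaint(small, small_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

# Recover detail by super-resolving back to the original size (plain bicubic here;
# the paper uses a hierarchical single-image SR algorithm instead).
result = cv2.resize(filled, (w, h), interpolation=cv2.INTER_CUBIC)
```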

Handwritten Numeral Pattern Recognition Using Neural Network

Unconstrained handwritten numeral recognition has been an active research area for the last few decades. Handwritten numeral recognition is used in many fields, such as bank checks, car plates, ZIP code recognition, mail sorting, and reading of commercial forms. This paper presents a technique to recognize handwritten numerals taken from pupils of different ages, including male, female, right-handed, and left-handed persons. A total of 340 numerals were collected from 34 people for sample creation. The conjugate gradient descent back-propagation algorithm (CGD-BP) is used for training. CGD-BP differs from primary back-propagation in that conjugate algorithms perform a line search along different directions, which produces faster convergence than primary back-propagation. Percentage recognition accuracy (PRA) and mean square error (MSE) are used to estimate the efficiency of the neural network in recognizing the numerals.
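
An illustrative stand-in using scikit-learn's bundled digits data rather than the paper's 340-sample set. scikit-learn does not expose a conjugate gradient solver for MLPs, so the quasi-Newton `lbfgs` solver is used below purely for demonstration.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                  # 8x8 handwritten digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer; 'lbfgs' stands in for the paper's CGD-BP training.
net = MLPClassifier(hidden_layer_sizes=(64,), solver="lbfgs",
                    max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"recognition accuracy: {net.score(X_test, y_test):.1%}")
```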

Fuzzy C-Means Clustering With Local Information And Kernel Metric For Image Segmentation

In this paper, we present an improved fuzzy c-means (FCM) algorithm for image segmentation that introduces a tradeoff weighted fuzzy factor and a kernel metric. The tradeoff weighted fuzzy factor depends simultaneously on the spatial distance of all neighboring pixels and on their gray-level difference. Using this factor, the new algorithm can accurately estimate the damping extent of neighboring pixels. To further enhance its robustness to noise and outliers, we introduce a kernel distance measure into its objective function. The new algorithm adaptively determines the kernel parameter using a fast bandwidth selection rule based on the distance variance of all data points in the collection. Furthermore, the tradeoff weighted fuzzy factor and the kernel distance measure are both parameter free. Experimental results on synthetic and real images show that the new algorithm is effective and efficient, and is relatively independent of the type of noise.
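
For reference, a baseline fuzzy c-means on gray levels in plain NumPy; the paper's algorithm augments this objective with the neighborhood-weighted fuzzy factor and swaps the Euclidean distance for a kernel metric, neither of which is implemented here.

```python
import numpy as np

def fcm(x, c=3, m=2.0, iters=100, seed=0):
    """x: 1-D array of gray levels; returns (cluster centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # fuzzy-weighted class means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / d ** (2 / (m - 1))        # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Example: segment a gray image into 3 classes by intensity.
# img = skimage.io.imread("image.png", as_gray=True)
# centers, u = fcm(img.ravel().astype(float))
# labels = u.argmax(axis=0).reshape(img.shape)
```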

Lossless Image Compression Using MATLAB

Lossless compression is necessary for many high-performance applications, such as geophysics, telemetry, nondestructive evaluation, and medical imaging, which require exact recovery of the original images. Lossless image compression can always be modeled as a two-stage procedure: decorrelation and entropy coding. The first stage removes spatial (inter-pixel) redundancy by means of run-length coding, SCAN-language-based methods, predictive techniques, transform techniques, and other decorrelation techniques. The second stage, which includes Huffman coding, arithmetic coding, and LZW, removes coding redundancy. Nowadays, the performance of entropy coding techniques is very close to its theoretical bound, so more research activity concentrates on the decorrelation stage. JPEG-LS and JPEG 2000 are the latest ISO/ITU standards for compressing continuous-tone images. JPEG-LS is based on the LOCO-I algorithm, which was chosen for the standard due to its good balance between complexity and efficiency; another technique proposed for JPEG-LS was CALIC. JPEG 2000 was designed with the main objective of providing efficient compression over a wide range of compression ratios.
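
A toy illustration of the two-stage view: a left-neighbor predictor for decorrelation, then an off-the-shelf coder (zlib standing in for Huffman/arithmetic/LZW). Real codecs such as JPEG-LS use adaptive, context-modeled prediction, so the ratio below is only indicative; the synthetic ramp image is a placeholder for a real photo.

```python
import numpy as np
import zlib

# Smooth synthetic image (horizontal ramp) as a stand-in for a real photo.
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

# Stage 1: decorrelation with a left-neighbor predictor (wraps mod 256).
residual = img.copy()
residual[:, 1:] = img[:, 1:] - img[:, :-1]

# Stage 2: entropy coding of the (now highly redundant) residuals.
code = zlib.compress(residual.tobytes(), 9)
print(f"{img.size} bytes -> {len(code)} bytes")

# Decode and invert the predictor exactly: the scheme is lossless.
dec = np.frombuffer(zlib.decompress(code), np.uint8).reshape(img.shape)
restored = np.cumsum(dec, axis=1, dtype=np.uint8)    # cumulative sum wraps mod 256
assert np.array_equal(restored, img)
```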

Removing Photography Artifacts Using Gradient Projection And Flash-Exposure Sampling

Flash images are known to suffer from several problems: saturation of nearby objects, poor illumination of distant objects, reflections of objects strongly lit by the flash, and strong highlights due to the reflection of the flash itself by glossy surfaces. We propose to use a flash and no-flash (ambient) image pair to produce better flash images. We present a novel gradient projection scheme, based on a gradient coherence model, that allows the removal of reflections and highlights from flash images. We also present a brightness-ratio-based algorithm that allows us to compensate for the falloff in flash image brightness with depth. In several practical scenarios, the quality of flash/no-flash images may be limited in terms of dynamic range; in such cases, we advocate using several images taken under different flash intensities and exposures. We analyze the flash intensity-exposure space and propose a method for adaptively sampling this space so as to minimize the number of captured images for any given scene. We present several experimental results that demonstrate the ability of our algorithms to produce improved flash images.
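
A sketch of the gradient-projection step only, assuming float grayscale inputs: keep the component of each flash-image gradient that is coherent with the ambient gradient direction, discarding flash-only reflection/highlight gradients. Integrating the projected field back into an image requires a Poisson solver, which is omitted here.

```python
import numpy as np

def grads(img):
    """Forward-difference gradients of a float grayscale image."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def project(flash, ambient, eps=1e-6):
    """Project flash gradients onto the ambient gradient direction per pixel."""
    fx, fy = grads(flash)
    ax, ay = grads(ambient)
    norm2 = ax * ax + ay * ay + eps
    scale = (fx * ax + fy * ay) / norm2   # dot(grad F, grad A) / |grad A|^2
    return scale * ax, scale * ay         # coherent part of the flash gradients

# px, py = project(flash, ambient)  # then integrate (px, py) via a Poisson solve
```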

A Dictionary Learning Approach For Poisson Image Deblurring

The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and method noise, the proposed algorithm outperforms state-of-the-art methods.
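
A sketch of the patch-based sparse-representation prior alone, using scikit-learn's dictionary learner; the total variation term, the Poisson data-fidelity term, and the alternating minimization with variable splitting are all omitted. A small float grayscale image is assumed, since all patches are extracted densely.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def sparse_patch_prior(img, patch=(8, 8)):
    """Learn a patch dictionary on img and reconstruct it sparsely."""
    P = extract_patches_2d(img, patch)
    flat = P.reshape(len(P), -1)
    mean = flat.mean(axis=1, keepdims=True)          # remove patch DC component
    dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                       max_iter=200, random_state=0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=4)
    dico.fit(flat - mean)
    codes = dico.transform(flat - mean)              # sparse codes per patch
    rec = codes @ dico.components_ + mean            # sparse reconstruction
    return reconstruct_from_patches_2d(rec.reshape(P.shape), img.shape)
```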

Local Disparity Estimation With Three-Moded Cross Census And Advanced Support Weight

Classical local disparity methods use a simple and efficient structure to reduce computational complexity. To increase the accuracy of the disparity map, newer local methods utilize additional processing steps such as iteration, segmentation, calibration, and propagation, similar to global methods. In this paper, we present an efficient one-pass local method with no iteration. The proposed method is also extended to video disparity estimation by using motion information as well as imposing spatiotemporal consistency. In local methods, the accuracy of stereo matching depends on a precise similarity measure and a proper support window. For the accuracy of the similarity measure, we propose a novel three-moded cross census transform with a noise buffer, which increases robustness to image noise in flat areas. The proposed similarity measure can be used in the same form for both stereo images and videos. We further improve the reliability of the aggregation by adopting the advanced support weight and incorporating motion flow to achieve a better depth map near moving edges in video scenes. Experimental results show that the proposed method is the best-performing local method on the Middlebury stereo benchmark and outperforms other state-of-the-art methods on video disparity evaluation.
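
For concreteness, a classical two-mode census transform with a Hamming matching cost; the paper's three-moded variant adds a noise buffer so that near-equal pixels in flat areas form a third state, which is not implemented here.

```python
import numpy as np

def census(img, r=2):
    """Census bit-strings packed as uint64, (2r+1)^2 - 1 comparisons per pixel."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    c = img[r:h-r, r:w-r]                            # center pixels
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            nb = img[r+dy:h-r+dy, r+dx:w-r+dx]       # shifted neighbor plane
            out[r:h-r, r:w-r] = (out[r:h-r, r:w-r] << np.uint64(1)) | (nb < c)
    return out

def hamming_cost(cl, cr, d):
    """Cost between left census and right census shifted by disparity d."""
    x = cl[:, d:] ^ cr[:, :cl.shape[1] - d]
    # popcount of the XOR = number of differing census bits per pixel
    bits = np.unpackbits(x.view(np.uint8), axis=-1)
    return bits.reshape(*x.shape, -1).sum(-1)
```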

Texture-Based Image Retrieval Using Framelet Transform–Gray Level Co-occurrence Matrix (GLCM)

This paper presents a novel content-based image retrieval (CBIR) system based on the framelet transform combined with the gray level co-occurrence matrix (GLCM). The proposed method is shift invariant, captures edge information more accurately than conventional transform-domain methods, and can handle images of arbitrary size. The system uses texture as the visual content for feature extraction. First, texture features are obtained by computing the energy, standard deviation, and mean on each subband of the framelet-transform-decomposed image. Then a new method combining the framelet transform with the GLCM is applied. The results of the proposed methods are compared with conventional methods, and we compare the retrieval results of the two methods. Euclidean distance, Canberra distance, and city block distance are used as similarity measures in the proposed CBIR system.
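
A sketch of the GLCM half of the feature pipeline with scikit-image, plus a Canberra distance for ranking; in the proposed system, the framelet-subband statistics (energy, standard deviation, mean) would be concatenated with these features. The `db` and `qfeat` names in the usage comment are hypothetical.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img_u8):
    """Haralick-style texture features from a uint8 grayscale image."""
    g = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                     levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(g, p).ravel() for p in props])

def canberra(a, b):
    return np.sum(np.abs(a - b) / (np.abs(a) + np.abs(b) + 1e-12))

# Retrieval: rank database images by Canberra distance to the query features.
# ranked = sorted(db.items(), key=lambda kv: canberra(qfeat, kv[1]))
```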

A Novel Joint Data-Hiding And Compression Scheme Based On SMVQ And Image Inpainting

In this paper, we propose a novel joint data-hiding and compression scheme for digital images using side-match vector quantization (SMVQ) and image inpainting. The two functions of data hiding and image compression are integrated seamlessly into a single module. On the sender side, except for the blocks in the leftmost column and topmost row of the image, each of the remaining blocks, in raster-scan order, can be embedded with secret data and compressed simultaneously by SMVQ or image inpainting, chosen adaptively according to the current embedding bit. VQ is also utilized for some complex blocks to control the visual distortion and error diffusion caused by the progressive compression. After segmenting the compressed image codes into a series of sections using the indicator bits, the receiver can extract the secret bits and decompress the image successfully according to the index values in the segmented sections. Experimental results demonstrate the effectiveness of the proposed scheme.
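
To give a flavor of joint compression and embedding, a toy substitute rather than the paper's scheme: a plain vector quantizer that hides one secret bit per block by choosing between nearby codewords of matching index parity. The side-match state codebooks, inpainting branch, and indicator bits are not modeled.

```python
import numpy as np

def embed_vq(blocks, codebook, bits):
    """blocks: (n, d) flattened image blocks; codebook: (K, d); bits: 0/1 per block.
    Returns VQ indices that both compress the blocks and carry the bits."""
    idx = []
    for blk, bit in zip(blocks, bits):
        d = np.linalg.norm(codebook - blk, axis=1)
        order = np.argsort(d)
        # nearest codeword whose index parity equals the secret bit
        choice = next(i for i in order if i % 2 == bit)
        idx.append(choice)
    return np.array(idx)

# Receiver side: secret bit = idx % 2; decompression: codebook[idx].
```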

Adaptive Interpolation Algorithm For Real-Time Image Resizing

In this paper, an adaptive interpolation algorithm based on the Newton polynomial is presented to overcome the limitations of traditional image resizing algorithms. The second-order difference of adjacent pixels' gray values reflects the correlation among the pixels. Accordingly, the adaptive function for image interpolation is derived from both this correlation and the classical Newton polynomial. The efficiency of our method is then compared with that of the traditional image resizing algorithm in MATLAB. Furthermore, the implementation circuit architecture for the adaptive image resizing algorithm is devised as a three-stage parallel pipeline and verified on an FPGA (field-programmable gate array). Experimental results show that our proposed algorithm surpasses bicubic interpolation in visual quality while having lower complexity. The algorithm is therefore suitable for real-time image resizing.
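
A one-dimensional sketch of the adaptive switch, with an assumed threshold and blending rule: the second-order difference gauges local smoothness, selecting cheap linear interpolation in smooth runs and a cubic Newton forward polynomial near detail.

```python
import numpy as np

def resize_1d(s, factor=2, thresh=4.0):
    """Upsample a 1-D signal by an integer factor, adaptively per sample."""
    s = s.astype(float)
    n = len(s)
    out = np.empty((n - 1) * factor + 1)
    for k in range(len(out)):
        t = k / factor
        i = min(int(t), n - 2)
        f = t - i                                     # fraction within [s[i], s[i+1]]
        d2 = abs(s[max(i - 1, 0)] - 2 * s[i] + s[i + 1])  # second-order difference
        if d2 < thresh or i == 0 or i >= n - 2:
            out[k] = (1 - f) * s[i] + f * s[i + 1]    # linear in smooth areas
        else:
            # cubic Newton forward polynomial through s[i-1..i+2]
            x = f + 1.0                               # offset from node s[i-1]
            d1 = np.diff(s[i - 1:i + 3])              # forward differences
            dd, ddd = np.diff(d1), np.diff(np.diff(d1))
            out[k] = (s[i - 1] + d1[0] * x + dd[0] * x * (x - 1) / 2
                      + ddd[0] * x * (x - 1) * (x - 2) / 6)
    return out
```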

Images As Occlusions Of Textures: A Framework For Segmentation

We propose a new mathematical and algorithmic framework for unsupervised image segmentation, which is a critical step in a wide variety of image processing applications. We have found that most existing segmentation methods are not successful on histopathology images, which prompted us to investigate segmentation of a broader class of images, namely those without clear edges between the regions to be segmented. We model these images as occlusions of random images, which we call textures, and show that local histograms are a useful tool for segmenting them. Based on our theoretical results, we describe a flexible segmentation framework that draws on existing work on nonnegative matrix factorization and image deconvolution. Results on synthetic texture mosaics and real histology images show the promise of the method.
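
A sketch of the local-histogram idea under simplifying assumptions: each pixel is described by the gray-level histogram of its neighborhood, the histogram matrix is factored with NMF, and pixels are labeled by their dominant factor. The deconvolution refinement is omitted, and two textures are assumed.

```python
import numpy as np
from sklearn.decomposition import NMF

def local_histograms(gray_u8, win=9, bins=16):
    """One neighborhood histogram per pixel, rows of an (h*w, bins) matrix."""
    h, w = gray_u8.shape
    q = (gray_u8.astype(int) * bins) // 256          # quantize gray levels
    r = win // 2
    padded = np.pad(q, r, mode="reflect")
    H = np.zeros((h * w, bins))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            H[i * w + j] = np.bincount(patch.ravel(), minlength=bins) / win**2
    return H

# H = local_histograms(img)                          # img: uint8 grayscale
# W = NMF(n_components=2, random_state=0).fit_transform(H)
# labels = W.argmax(axis=1).reshape(img.shape)       # 2 textures assumed
```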

LBP-Based Edge-Texture Features For Object Recognition

This paper proposes two sets of novel edge-texture features, the discriminative robust local binary pattern (DRLBP) and the discriminative robust local ternary pattern (DRLTP), for object recognition. By investigating the limitations of the local binary pattern (LBP), local ternary pattern (LTP), and robust LBP (RLBP), DRLBP and DRLTP are proposed as new features. They solve the problem, inherent in LBP and LTP, of discriminating between a bright object against a dark background and vice versa. DRLBP also resolves the problem of RLBP whereby LBP codes and their complements in the same block are mapped to the same code. Furthermore, the proposed features retain the contrast information necessary for proper representation of object contours, which LBP, LTP, and RLBP discard. Our proposed features are tested on seven challenging data sets: INRIA Human, Caltech Pedestrian, UIUC Car, Caltech 101, Caltech 256, Brodatz, and KTH-TIPS2-a. Results demonstrate that the proposed features outperform the compared approaches on most data sets.
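
For context, baseline uniform-LBP histogram features with scikit-image; the proposed DRLBP/DRLTP additionally fold in complement codes and contrast weighting, which plain LBP discards.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(gray, P=8, R=1):
    """Normalized histogram of uniform LBP codes for a grayscale image."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                          # the uniform mapping yields P+2 codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```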

Mixed Noise Removal By Weighted Encoding With Sparse Nonlocal Regularization

Mixed noise removal from natural images is a challenging task, since the noise distribution usually does not have a parametric model and has a heavy tail. One typical kind of mixed noise is additive white Gaussian noise (AWGN) coupled with impulse noise (IN). Many mixed noise removal methods are detection based: they first detect the locations of IN pixels and then remove the mixed noise. However, such methods tend to generate many artifacts when the mixed noise is strong. In this paper, we propose a simple yet effective method, namely weighted encoding with sparse nonlocal regularization (WESNR), for mixed noise removal. In WESNR, there is no explicit step of impulse pixel detection; instead, soft impulse pixel detection via weighted encoding is used to deal with IN and AWGN simultaneously. Meanwhile, the image sparsity prior and the nonlocal self-similarity prior are integrated into a regularization term and introduced into the variational encoding framework. Experimental results show that the proposed WESNR method achieves leading mixed noise removal performance in terms of both quantitative measures and visual quality.
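
A toy rendering of the soft impulse handling only: weight each pixel by its agreement with a robust median reference, then average with those weights, so impulses are downweighted rather than hard-detected. WESNR instead performs the weighting inside a sparse coding model with a nonlocal regularizer; the sigma and window sizes below are arbitrary.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def weighted_denoise(y, sigma=20.0):
    """y: float grayscale image corrupted by AWGN + impulse noise."""
    ref = median_filter(y, size=3)                    # robust reference
    w = np.exp(-((y - ref) ** 2) / (2 * sigma ** 2))  # impulses get tiny weights
    num = uniform_filter(w * y, size=5)               # weighted local average
    den = uniform_filter(w, size=5)
    return num / np.maximum(den, 1e-8)
```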

Corner Detection And Classification Using Anisotropic Directional Derivative Representations

This paper proposes a corner detector and classifier using anisotropic directional derivative (ANDD) representations. The ANDD representation at a pixel is a function of the orientation angle and characterizes the local directional gray-scale variation around the pixel. The proposed corner detector fuses the ideas of contour- and intensity-based detection. It consists of three cascaded blocks. First, the edge map of an image is obtained by the Canny detector, from which contours are extracted and patched. Next, the ANDD representation at each pixel on the contours is calculated and normalized by its maximal magnitude. The area enclosed by the normalized ANDD representation forms a new corner measure. Finally, non-maximum suppression and thresholding are applied to each contour to find corners in terms of the corner measure. Moreover, a corner classifier based on the peak number of the ANDD representation is given. Experiments are conducted to evaluate the proposed detector and classifier. The proposed detector is competitive with two recent state-of-the-art corner detectors, the He & Yung detector and the CPDA detector, in detection capability, and attains higher repeatability under affine transforms. The proposed classifier can effectively discriminate simple corners, Y-type corners, and higher-order corners.
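
A sketch of an anisotropic directional-derivative response with assumed kernel parameters: an elongated Gaussian derivative filter at several orientations gives a per-pixel response as a function of angle, the kind of representation from which the corner measure is built. The normalization, contour restriction, and corner measure itself are omitted.

```python
import numpy as np
from scipy.ndimage import convolve

def andd_kernel(theta, sigma_u=1.0, sigma_v=3.0, size=15):
    """First-derivative-of-Gaussian kernel, elongated perpendicular to theta."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    u = x * np.cos(theta) + y * np.sin(theta)         # derivative direction
    v = -x * np.sin(theta) + y * np.cos(theta)        # smoothing direction
    g = np.exp(-(u**2 / (2 * sigma_u**2) + v**2 / (2 * sigma_v**2)))
    return (-u / sigma_u**2) * g

def andd_responses(gray, n_angles=16):
    """Stack of directional-derivative responses, one per orientation."""
    thetas = np.linspace(0, np.pi, n_angles, endpoint=False)
    return np.stack([convolve(gray, andd_kernel(t)) for t in thetas])
```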

Novel True-Motion Estimation Algorithm And Its Application To Motion-Compensated Temporal Frame Interpolation

A new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation (ME), as often used in video coding, aims to find the motion vectors (MVs) that minimize temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block matching algorithm (BMA). To produce better-quality interpolated frames, a dense motion field at the interpolation instant is obtained for both forward and backward MVs; then, bidirectional motion compensation is applied by elegantly mixing both. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and against the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of frames interpolated with the proposed method is better than that of the compared MCFRUC techniques.
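
A toy skeleton of the underlying operations, not the proposed TME: exhaustive block matching followed by bidirectional averaging at the temporal midpoint. The smoothness constraints, the dense motion field at interpolation time, and the mixing of forward/backward fields are omitted.

```python
import numpy as np

def block_match(prev, curr, B=16, S=8):
    """One MV per BxB block of curr: full search within +/-S, minimum SAD."""
    h, w = curr.shape
    mvs = np.zeros((h // B, w // B, 2), dtype=int)
    for by in range(h // B):
        for bx in range(w // B):
            y, x = by * B, bx * B
            blk = curr[y:y + B, x:x + B].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-S, S + 1):
                for dx in range(-S, S + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - B and 0 <= xx <= w - B:
                        sad = np.abs(prev[yy:yy + B, xx:xx + B].astype(np.int32)
                                     - blk).sum()
                        if best is None or sad < best:
                            best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs

def interpolate_midframe(prev, curr, mvs, B=16):
    """Bidirectional compensation: sample both frames half an MV apart."""
    h, w = curr.shape
    out = np.zeros_like(curr, dtype=np.float64)
    for by in range(h // B):
        for bx in range(w // B):
            y, x = by * B, bx * B
            dy, dx = mvs[by, bx]
            py, px = np.clip(y + dy // 2, 0, h - B), np.clip(x + dx // 2, 0, w - B)
            cy, cx = np.clip(y - dy // 2, 0, h - B), np.clip(x - dx // 2, 0, w - B)
            out[y:y + B, x:x + B] = 0.5 * prev[py:py + B, px:px + B] \
                                  + 0.5 * curr[cy:cy + B, cx:cx + B]
    return out.astype(prev.dtype)
```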

Object Segmentation Of Database Images By Dual Multiscale Morphological Reconstructions And Retrieval Applications

Processing images for specific targets on a large scale has to handle various kinds of content with regular processing steps. To segment objects in an image, we utilize dual multiscale gray-level morphological open and close reconstructions (SEGON) to build a background (BG) gray-level variation mesh, which can help to identify BG and object regions. It was developed from a macroscopic perspective on image BG gray levels and implemented using standard procedures, thus dealing robustly with large-scale database images. The image segmentation capability of existing methods can be exploited by the BG mesh to improve object segmentation accuracy. To evaluate segmentation accuracy, the probability of coherent segmentation labeling, i.e., the normalized probabilistic rand index (PRI), between a computer-segmented image and a hand-labeled one is computed for comparison. Content-based image retrieval (CBIR) was carried out to evaluate the object segmentation capability in dealing with large-scale database images. Retrieval precision-recall (PR) and rank performances, with and without SEGON, were compared. For multi-instance retrieval with the shape feature, AdaBoost was used to select salient common feature elements. For color features, the histogram intersection between two scalable HSV descriptors was calculated, and the mean feature vector was used for multi-instance retrieval. The distance measure for the color feature can be adapted when both positive and negative queries are provided. The normalized correlation coefficient of features among query samples was computed to integrate the similarity ranks of different features in order to perform multi-instance, multifeature queries.
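
A sketch of a background mesh via gray-level opening and closing by reconstruction with scikit-image; SEGON builds such reconstructions at multiple scales, which is not reproduced here, and the structuring-element radius is an assumption.

```python
from skimage.morphology import reconstruction, erosion, dilation, disk

def background_mesh(gray, radius=5):
    """Opening then closing by reconstruction: a smooth BG gray-level estimate."""
    se = disk(radius)
    # opening by reconstruction: erode to get a marker, reconstruct by dilation
    opened = reconstruction(erosion(gray, se), gray, method="dilation")
    # closing by reconstruction: dilate the marker, reconstruct by erosion
    return reconstruction(dilation(opened, se), opened, method="erosion")

# Object regions stand out where gray differs markedly from background_mesh(gray).
```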

A Synopsis Of Recent Work In Edge Detection Using The DWT

Automatic edge detection is a highly researched field because it is used in many different image processing applications, such as diagnosis in medical imaging, topographical recognition, and automated inspection of machine assemblies. Historically, the discrete wavelet transform (DWT) has been a successful technique for edge detection. The contributions of recent work in this area are examined and summarized concisely. The use of multiple phases, such as denoising, preprocessing, thresholding of coefficients, smoothing, and post-processing, is suggested alongside multiple iterations of the DWT. The DWT is combined with various other methods to obtain an optimal solution to the edge detection problem. This synopsis consolidates recent, related work into one source.
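
A minimal single-level DWT edge map with PyWavelets: edges are taken from the magnitude of the detail subbands. The surveyed pipelines add denoising, multiple DWT iterations, adaptive thresholds, and post-processing; the wavelet and threshold below are assumptions.

```python
import numpy as np
import pywt

def dwt_edges(gray, wavelet="haar", thresh=0.1):
    """Binary edge map (at half resolution) from one DWT analysis level."""
    _, (cH, cV, cD) = pywt.dwt2(gray, wavelet)   # horizontal/vertical/diagonal details
    mag = np.sqrt(cH**2 + cV**2 + cD**2)
    mag /= mag.max() + 1e-12                     # normalize to [0, 1]
    return mag > thresh
```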