Full Discussion: print metadata to jpg
Post 302484116 by rdcwayx, Wednesday 29th of December 2010, 08:28:51 PM
Code:
source=ML.JPG
target=target.JPG

# Get the image size: width=$1, height=$2.  You may need to adjust the
# field number depending on the output format of the identify command
# for your image.
set -- $(identify "$source" | awk 'NR==1{split($3,a,"x"); print a[1],a[2]}')

# jhead prints one GPS tag per line (Latitude, Longitude, Altitude).
# The awk variable is named height because length is an awk built-in.
jhead "$source" | grep "GPS" | awk -v width=$1 -v height=$2 -v sfile="$source" -v tfile="$target" '
NR==1{Latitude=$0} NR==2{Longitude=$0} NR==3{Altitude=$0}
END{print "convert -pointsize 18 -font /path/to/font.ttf -fill white -stroke black -strokewidth 1 -draw \"text 10," height-72 " \047" Latitude "\047\" -draw \"text 10," height-54 " \047" Longitude "\047\" -draw \"text 10," height-36 " \047" Altitude "\047\" " sfile " " tfile}'

convert is an ImageMagick command. The awk script only prints the convert command line it builds; if the output looks right, copy it and run it directly (or pipe the whole pipeline to sh).
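
If you would rather not compute pixel offsets from the image height at all, ImageMagick can place text relative to a corner with -gravity and -annotate. A minimal sketch of that variant follows; the "GPS Latitude" / "GPS Longitude" / "GPS Altitude" label patterns are assumptions about what jhead prints for your camera, so adjust them to match your actual output:

Code:
source=ML.JPG
target=target.JPG

# Pull each field out of the jhead output.  The label patterns below are
# assumptions; check what jhead actually prints and adjust them.
lat=$(jhead "$source" | awk -F' *: *' '/GPS Latitude/ {print $2}')
lon=$(jhead "$source" | awk -F' *: *' '/GPS Longitude/ {print $2}')
alt=$(jhead "$source" | awk -F' *: *' '/GPS Altitude/ {print $2}')

# -gravity SouthWest makes the -annotate +x+y offsets relative to the
# bottom-left corner, so the image height is never needed.
convert "$source" \
    -pointsize 18 -font /path/to/font.ttf \
    -fill white -stroke black -strokewidth 1 \
    -gravity SouthWest \
    -annotate +10+72 "Latitude:  $lat" \
    -annotate +10+54 "Longitude: $lon" \
    -annotate +10+36 "Altitude:  $alt" \
    "$target"

This variant runs convert directly instead of printing the command, so there is nothing to copy by hand.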
