I'm running a curl command in bash, but the & in the middle of the URL causes the second half of the line to run in the background. Here's what I'm trying to do:
lat="37.451"
lon="-122.18"
url="http://ws.geonames.org/findNearestAddress?lat=$lat&lng=$lon"
curl -s "$url"
I tried escaping the & with \&,... (4 Replies)
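For reference, the usual fix is to keep the & inside quotes at every step; a minimal sketch, reusing the coordinates and endpoint from the post above:

#!/bin/bash
lat="37.451"
lon="-122.18"
# The & stays inside the double quotes of the assignment...
url="http://ws.geonames.org/findNearestAddress?lat=$lat&lng=$lon"
# ...and the expansion is double-quoted as well, so the shell never sees a
# bare & and nothing gets sent to the background.
curl -s "$url"

If the URL is typed directly on the curl line instead of going through a variable, the same rule applies: put the whole URL in quotes so the & is not parsed as the background operator.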
I am trying to find a way to test some code, but I need to rewrite a specific URL only for a specific HTTP_HOST.
The call goes out to
http://SUB.DOMAIN.COM/showAssignment/7bde10b45efdd7a97629ef2fe01f7303/jsmodule/Nevow.Athena
The ID in the middle is always random due to the cookie.
I... (5 Replies)
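If the goal is just to get a stable URL to test against, one shell-level option is to normalize the random ID before comparing or rewriting. A sketch only, assuming the ID is always 32 hex characters and that the host check and the FIXED_ID placeholder match what the test actually needs:

#!/bin/bash
url="http://SUB.DOMAIN.COM/showAssignment/7bde10b45efdd7a97629ef2fe01f7303/jsmodule/Nevow.Athena"

case "$url" in
    http://SUB.DOMAIN.COM/*)    # only rewrite for this specific host
        printf '%s\n' "$url" |
            sed -E 's#(/showAssignment/)[0-9a-f]{32}(/)#\1FIXED_ID\2#'
        ;;
    *)
        printf '%s\n' "$url"    # URLs for other hosts pass through untouched
        ;;
esac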
Hi,
I am new to shell scripting and have been doing a lot of reading. I am having some trouble getting started with writing simple test scripts. I have been experimenting with if, loops, for, test, etc., but I'm still unsure. I seem to have the hang of it when it comes to creating a single file or... (6 Replies)
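A small practice sketch (the file names are made up) that combines for, if, and test in the way the post describes, creating a few files and then checking that each one exists:

#!/bin/bash
# create several files in /tmp
for name in one two three; do
    touch "/tmp/demo_$name.txt"
done

# test each one and report
for f in /tmp/demo_*.txt; do
    if [ -f "$f" ]; then
        echo "created: $f"
    fi
done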
#!/bin/bash
timevar=$(date +"%F_%H_%M") # --> storing date and time in a variable
get_contents=$(cat urls.txt) # --> getting the list of websites from the file; note the file should not contain any http:// as it's already been taken care of
######### Next Section Does all the processing #########
for i... (0 Replies)
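A minimal sketch of how the loop hinted at above might continue, assuming urls.txt holds one host name per line and that saving each page to a timestamped file is the goal (the output file naming is an assumption, not from the post):

#!/bin/bash
timevar=$(date +"%F_%H_%M")

for i in $(cat urls.txt); do
    # entries carry no http:// prefix, so it is added here
    curl -s "http://$i" -o "${i}_${timevar}.html"
done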
#!/bin/bash
timevar=$(date +%d-%m-%Y_%H.%M.%S) # --> storing date and time in a variable
get_contents=$(cat urls.txt) # --> getting the list of websites from the file; note the file should not contain any http:// as it's already been taken care of
echo "Datae-time URL Status code Report" >... (2 Replies)
Hi All,
I have some HTML files and my requirement is to extract all the anchor text from the HTML files along with their URLs and store the result in a separate text file, separated by a space. For example, <a href="/kid/stay_healthy/">Staying Healthy</a>
which has /kid/stay_healthy/ as... (3 Replies)
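One way to approach this from the shell, as a sketch only: it assumes each anchor sits on a single line with a double-quoted href and no nested tags (real-world HTML may need a proper parser), and the output file name anchors.txt is made up.

#!/bin/bash
# print "URL anchor-text" pairs, one per line, separated by a space
sed -n 's#.*<a[^>]*href="\([^"]*\)"[^>]*>\([^<]*\)</a>.*#\1 \2#p' ./*.html > anchors.txt

# the example above, <a href="/kid/stay_healthy/">Staying Healthy</a>,
# would come out as: /kid/stay_healthy/ Staying Healthy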
Here is what I have so far:
find . -name "*php*" -or -name "*htm*" | xargs grep -i iframe | awk -F'"' '/<iframe*/{gsub(/.\*iframe>/,"\"");print $2}'
Here is an example of the content of a PHP or HTM (HTML) file:
<iframe src="http://ADDRESS_1/?click=5BBB08\" width=1 height=1... (18 Replies)
Hello,
I am very new to Perl, please help me here!
I need help reading a URL from the command line using Perl Mechanize and getting all the contents from the URL into a file.
Below is the script I have written so far:
#!/usr/bin/perl
use LWP::UserAgent;
use... (2 Replies)
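If Perl is not a hard requirement, the same fetch-and-save can be done from the shell; a sketch only (the script arguments and default file name are made up):

#!/bin/bash
url="$1"                            # URL passed on the command line
outfile="${2:-page_contents.html}"  # optional output file name, defaults to page_contents.html

curl -s -L "$url" -o "$outfile"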
sg_get_page_stats(3)            Library Functions Manual            sg_get_page_stats(3)

NAME
sg_get_page_stats, sg_get_page_stats_diff - get paging statistics
SYNOPSIS
#include <statgrab.h>
sg_page_stats *sg_get_page_stats(void);
sg_page_stats *sg_get_page_stats_diff(void);
DESCRIPTION
sg_get_page_stats and sg_get_page_stats_diff both return a pointer to a static buffer of type sg_page_stats.
sg_get_page_stats will return the number of pages the system has paged in and out since bootup. sg_get_page_stats_diff will return the
difference since the last time it was called. If it has not been called before, it will return the same as sg_get_page_stats.
RETURN VALUES
typedef struct{
long long pages_pagein;
long long pages_pageout;
time_t systime;
}sg_page_stats;
pages_pagein
The number of pages swapped into memory.
pages_pageout
The number of pages swapped out of memory (to swap).
systime
The time period over which pages_pagein and pages_pageout were transferred.
BUGS
Solaris doesn't seem to report accurately. It reports the number of pages swapped into memory, not necessarily from swap. This feature
isn't deemed entirely reliable.
SEE ALSO
       statgrab(3)

WEBSITE
       http://www.i-scream.org/libstatgrab/
i-scream $Date: 2005/04/25 11:25:45 $ sg_get_page_stats(3)