Lecture Note
University: Boston University
Course: MET CS 201 | Introduction to Programming
Academic year: 2023
<P>){
  chomp($line);
  @fields=split(/:/,$line);   # split up each line along : and assign to array @fields
  foreach $gid (@GID){        # loop over @GID (the gid's of interest) and check
    if ($fields[3]==$gid){
      print "$line\n";
    }
  }
}
close(P);                     # close password file

Ok, now what? Contained in each line is the home directory of the given user, say
/home/username. As such, their .login file is /home/username/.login.
To check the access time of this file, we can use the -A file test operator, which
returns the number of days since the given file (or directory) was last accessed.
So we will use a conditional of the form:

if(-A "/home/username/.login" > 180){
  # lock their account
}

So, here is how the final script might go.

#!/usr/local/bin/perl
@GID=("25000","25001","25002");
$noshell="/bin/nosh";                        # void shell prevents login
system("cp /etc/passwd /etc/passwd.save");   # safety first!
open(P,"/etc/passwd");
open(NP,">/etc/newpasswd");
while($line=<P>){
  chomp($line);
  @fields=split(/:/,$line);
  foreach $gid (@GID){
    if ($fields[3]==$gid){
      $homedir=$fields[5];
      if(-A "$homedir/.login" > 180){
        $line=~s/$fields[6]/$noshell/;
      }
    }
  }
  print NP "$line\n";
}
close(P);
close(NP);
system("rm /etc/passwd;mv /etc/newpasswd /etc/passwd");

Let's break this down.

#!/usr/local/bin/perl
@GID=("25000","25001","25002");              # set up gid array
$noshell="/bin/nosh";                        # setting a user's shell to /bin/nosh
                                             # makes logins impossible
system("cp /etc/passwd /etc/passwd.save");   # make a backup of /etc/passwd
open(P,"/etc/passwd");
open(NP,">/etc/newpasswd");                  # this is the modified version of /etc/passwd

while($line=<P>){                 # read in /etc/passwd one line at a time
  chomp($line);
  @fields=split(/:/,$line);       # and split the fields up along :
  foreach $gid (@GID){            # check each line for one of the gid's we want
    if ($fields[3]==$gid){
      $homedir=$fields[5];        # pick out home dir
      if(-A "$homedir/.login" > 180){     # check if .login has not been accessed
                                          # for over 180 days
        $line=~s/$fields[6]/$noshell/;    # if so, then replace shell ($fields[6])
                                          # with "/bin/nosh"
      }
    }
  }
  print NP "$line\n";   # regardless of whether we modified the user's shell,
                        # write the line to the file /etc/newpasswd
}
close(P);
close(NP);
system("rm /etc/passwd;mv /etc/newpasswd /etc/passwd");
Once done, close both /etc/passwd and /etc/newpasswd.
Then remove the old /etc/passwd and replace it with the modified version.
Note, we made a backup of /etc/passwd beforehand in case something went
wrong while this script was running.
To clarify, if /home/fred/.login has not been accessed for more than 6 months,
then this is what happens to his entry in /etc/passwd:

fred:x:3216:25000:Fred Flintstone:/home/fred:/bin/bash

becomes

fred:x:3216:25000:Fred Flintstone:/home/fred:/bin/nosh

(Here /home/fred is $fields[5] and the shell field at the end is $fields[6].)
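Incidentally, the final rm/mv step can also be done from within Perl itself. Here is
a minimal sketch using the built-in rename function, which overwrites the target in
a single step (when both files are on the same filesystem), so there is no moment
when /etc/passwd is missing:

# equivalent to: rm /etc/passwd; mv /etc/newpasswd /etc/passwd
# rename() replaces the target atomically when both files are
# on the same filesystem
rename("/etc/newpasswd","/etc/passwd")
  or die "could not replace /etc/passwd: $!";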
Perl and the Web
Perl is used in many ways for web applications, including the management
of web servers as well as CGI scripting and more.
Our first example will involve the analysis of web server logs.
In particular, we will show how to parse the log files and retrieve the
statistical information contained therein, such as the addresses of the
sites connecting to the server, the content downloaded, and so on.
This is not, strictly speaking, a web-centric demonstration, since it will
be more about crafting regular expressions to analyze text data; nonetheless,
it's as good an example of this as any, so...
The basic information recorded for any web 'event' that a server
might log is:
• the address of the incoming connection (i.e. who visited)
• the time of the connection
• what content they downloaded
Additionally, one may record other data such as:
• the site they came to yours from via a link (the referrer)
• the hardware/software combination they use
(e.g. Unix, Windows, Netscape, IE)
Ex: A typical entry in an access_log file:
168.122.230.172 - - [16/Feb/2001:08:42:52 -0500] "GET /people/tkohl/teaching/spring2001/secant.pdf HTTP/1.1" 200 0 "http://math.bu.edu/people/tkohl/teaching/spring2001/MA121.html" "Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)"

168.122.230.172
    IP address of visitor
[16/Feb/2001:08:42:52 -0500]
    time
"GET /people/tkohl/teaching/spring2001/secant.pdf HTTP/1.1"
    content they retrieved
200 0
    server response code (and bytes sent)
"http://math.bu.edu/people/tkohl/teaching/spring2001/MA121.html"
    referrer
"Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)"
    client software and architecture
In order to parse this file and extract the relevant information, say for some
statistical analysis, we need to describe log entries with a regular expression
and extract the different components.
Here is a subroutine for parsing entries such as the one above.
sub parse_log{
  my $entry = $_[0];
  $entry =~ /([\d\.]+) \- \- (\[[^\]]+\]) \"([^\"]+)\" (\d+ \d+) \"([^\"]+)\" \"([^\"]+)\"/;
  return ($1,$2,$3,$4,$5,$6);
}
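For instance, feeding the sample entry above to parse_log might look like this
(a quick sketch; the variable names are just illustrative):

my $sample = '168.122.230.172 - - [16/Feb/2001:08:42:52 -0500] "GET /people/tkohl/teaching/spring2001/secant.pdf HTTP/1.1" 200 0 "http://math.bu.edu/people/tkohl/teaching/spring2001/MA121.html" "Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)"';

my ($ip,$date,$request,$status,$referrer,$client) = parse_log($sample);
print "visitor: $ip\n";        # prints: visitor: 168.122.230.172
print "request: $request\n";   # prints the quoted GET line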
Let's examine the pattern to clarify what's going on.
Discounting the spaces and dashes between the entries, here are the subpatterns
that capture ("memorize") each portion:

([\d\.]+)        ip address
(\[[^\]]+\])     date (including the brackets)
\"([^\"]+)\"     content downloaded
(\d+ \d+)        status code
\"([^\"]+)\"     referrer
\"([^\"]+)\"     client info
([\d\.]+)
    IP address: one or more occurrences of the class of digits or periods.

(\[[^\]]+\])
    date: a real [, then the class of things other than ] (one or more
    occurrences), then a real ].
18 \"([^\"]+)\"
content downloaded
referrer
client information
class of things other than literal "
one or more occurrences
look for literal "
(\d+ \d+)
status code
two numbers with a space in-between
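As an aside, Perl's /x modifier lets whitespace and comments appear inside a
pattern, so the same regular expression can be annotated in place. Here is an
equivalent, commented version of parse_log (a sketch; m{...}x simply replaces
the /.../ delimiters, and [ ] matches a literal space under /x):

sub parse_log_commented {
  my $entry = $_[0];
  $entry =~ m{
      ([\d\.]+)    [ ]-[ ]-[ ]   # IP address
      (\[[^\]]+\]) [ ]           # date (including the brackets)
      "([^"]+)"    [ ]           # content downloaded
      (\d+[ ]\d+)  [ ]           # status code
      "([^"]+)"    [ ]           # referrer
      "([^"]+)"                  # client information
  }x;
  return ($1,$2,$3,$4,$5,$6);
}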
So now, the components of the log entry are returned as an array
from the parse_log function.
So we might use it in a larger script as follows:

open(LOG,"/usr/local/apache/logs/access_log");
while($line=<LOG>){
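  # (what follows is a sketch of one possible continuation -- the hash
  #  %hits and the per-IP tally are illustrative, not from the original)
  chomp($line);
  ($ip,$date,$request,$status,$referrer,$client)=parse_log($line);
  $hits{$ip}++;                    # tally the number of hits per visitor
}
close(LOG);
foreach $ip (sort keys %hits){     # report each visiting address and
  print "$ip : $hits{$ip}\n";      # how many requests it made
}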
Our next example retrieves a web page itself: the BU academic calendar at
http://www.bu.edu/reg/cal0405.htm, whose schedule entries are what we're after.
So we can write a script to retrieve this URL and then do some custom
filtering of the data.
#!/usr/bin/perl
use LWP::Simple;
$URL="http://www.bu.edu/reg/cal0405.htm";
@DATA=split(/\n/,get($URL));
foreach (@DATA){
if(/\<font[^\>]*\>(.*)\<\/font\>/){   # grab the text inside <font>...</font> tags
$item=$1;
print "$item\n";
}
}
which, when run, yields:
Instruction Begins
Wednesday, May 19, 2004
Holiday, Classes Suspended
Monday, May 31, 2004
Instruction Ends
Wednesday, June 30, 2004
Instruction Begins
Tuesday, July 6, 2004
Instruction Ends
Friday, August 13, 2004
. . . etc.

This is what we want. Now let's add a line between each logical entry.
#!/usr/bin/perl
use LWP::Simple;
$URL="http://www.bu.edu/reg/cal0405.htm";
@DATA=split(/\n/,get($URL));
foreach (@DATA){
  if(/\<font[^\>]*\>(.*)\<\/font\>/){
    $item=$1;
    print "$item\n";
    ($item=~/200(4|5)/) && (print "\n");   # issue a newline if the item
                                           # ends in 2004 or 2005
  }
}

And now the output looks a bit neater:
Instruction Begins
Wednesday, May 19, 2004
Holiday, Classes Suspended
Monday, May 31, 2004
Instruction Ends
Wednesday, June 30, 2004
. . . etc.
Of course, we could look closer at the original web page and observe that
there is a link to a PDF version of the calendar! Perhaps we could grab just
this file and put it in our home directory, as sketched below.
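A minimal sketch of that idea uses LWP::Simple's getstore function. Note that
the PDF's URL here is hypothetical; the real link would be read off the
calendar page:

#!/usr/bin/perl
use LWP::Simple;

# hypothetical URL -- take the actual link from the calendar page
$PDF="http://www.bu.edu/reg/cal0405.pdf";

# getstore() fetches the URL and writes the response body straight
# to a local file, returning the HTTP status code
getstore($PDF, "$ENV{HOME}/cal0405.pdf");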
The point in both cases is that these tools give us the power
to extract data (potentially very volatile data) from a remote site
and use it in our own scripts, perhaps with a bit of filtering on our
part, but this is easy when using Perl!
Text Processing
In this example, we will analyze the text in a small book and create
an index of the words in the book and how often they occur.
The first part will be to actually obtain a small text to analyze.
#!/usr/bin/perl
use LWP::Simple;
$URL="ftp://nic.funet.fi/pub/doc/literary/etext/flatland.txt.gz";
open(F,">./flatland.txt.gz");
print F get($URL);
close(F);
(!(-e "./flatland.txt")) && system("gunzip ./flatland.txt.gz");
We use the LWP module to retrieve the compressed text of the book Flatland,
download it to the current directory, and then uncompress it with the
'gunzip' command for .gz files.
On a Windows system, you can just download the file and uncompress it
manually.
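Alternatively, the decompression can be done in Perl itself with the core
IO::Uncompress::Gunzip module, avoiding the external gunzip dependency
(a sketch; this works on Windows as well):

use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

# decompress flatland.txt.gz into flatland.txt
gunzip "./flatland.txt.gz" => "./flatland.txt"
  or die "gunzip failed: $GunzipError";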
Next comes the actual reading and indexing of the words in the text.
open(F,"./flatland.txt");   # open the file for reading
while($line=<F>){
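  # (the loop body was described as: filter out any punctuation and
  #  non-word characters and replace every occurrence of them with spaces;
  #  the word-count hash %INDEX below is an illustrative reconstruction)
  chomp($line);
  $line=~s/[^\w]/ /g;                  # replace punctuation/non-word
                                       # characters with spaces
  foreach $word (split(/\s+/,$line)){
    next if ($word eq "");
    $INDEX{lc($word)}++;               # count each word, case-insensitively
  }
}
close(F);

# print the index: each word and how often it occurs
foreach $word (sort keys %INDEX){
  print "$word : $INDEX{$word}\n";
}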