HTTPS

Task
You are encouraged to solve this task according to the task description, using any language you may know.

Print an HTTPS URL's content to the console. Checking the host certificate for validity is recommended. The client should not authenticate itself to the server — the webpage https://sourceforge.net/ supports that access policy — as that is the subject of other tasks.

Readers may wish to contrast with the HTTP Request task, and also the task on HTTPS request with authentication.

Ada

Library: AWS

Exactly the same as the HTTP task, assuming you compiled AWS with openssl support.

 
with AWS.Client;
with AWS.Response;
with Ada.Text_IO; use Ada.Text_IO;
procedure GetHttps is
begin
   Put_Line (AWS.Response.Message_Body (AWS.Client.Get (
      URL => "https://sourceforge.net/")));
end GetHttps;
 

AutoHotkey

Library: wininet
 
URL := "https://sourceforge.net/"
WININET_Init()
msgbox % html := UrlGetContents(URL)
WININET_UnInit()
return
#include urlgetcontents.ahk
#include wininet.ahk
 

Batch File

 
:: Must have curl.exe
curl.exe -k -s -L https://sourceforge.net/
 

C

Library: libcurl
 
#include <stdio.h>
#include <stdlib.h>
#include <curl/curl.h>
 
int
main(void)
{
    CURL *curl;
    char buffer[CURL_ERROR_SIZE];

    if ((curl = curl_easy_init()) != NULL) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://sourceforge.net/");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1);
        curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, buffer);
        if (curl_easy_perform(curl) != CURLE_OK) {
            fprintf(stderr, "%s\n", buffer);
            return EXIT_FAILURE;
        }
        curl_easy_cleanup(curl);
    }
    return EXIT_SUCCESS;
}
 

C#

Works with: C sharp version 3.0
 
using System;
using System.Net;
 
class Program
{
    static void Main(string[] args)
    {
        var client = new WebClient();
        var data = client.DownloadString("https://www.google.com");

        Console.WriteLine(data);
    }
}
 

Clojure

Using duck-streams as a convenient wrapper for Java's networking classes, grabbing the contents of an HTTPS URL is as easy as:

 
(use '[clojure.contrib.duck-streams :only (slurp*)])
(print (slurp* "https://sourceforge.net"))
 

The usual Java mechanisms can be used to manage acceptance of SSL certificates if required.
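
For example, a common such mechanism is to install an SSLContext backed by a custom trust store before the URL is opened. The sketch below is plain Java, not Clojure; the CustomTrust class name and the trust-store path are illustrative assumptions. Any HttpsURLConnection created after the call, including the one slurp opens under the hood, is then validated against that store.

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

// Illustrative helper (hypothetical name): trust only the certificates in the given keystore.
public class CustomTrust {
    public static void install(String trustStorePath, char[] password) throws Exception {
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(trustStorePath)) {
            trustStore.load(in, password);                    // read the custom trust store
        }
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);                                 // trust managers backed by that store
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, tmf.getTrustManagers(), null);     // no client certs, default randomness
        // Every HttpsURLConnection created after this call uses the custom trust settings.
        HttpsURLConnection.setDefaultSSLSocketFactory(context.getSocketFactory());
    }
}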

Works with: Clojure version 1.2
 
(print (slurp "https://sourceforge.net"))
 

Common Lisp

Library: DRAKMA

First grabbing the entire body as a string, and then by pulling from a stream. This is the same code as in HTTP Request; drakma:http-request supports SSL.

 
(defun wget-drakma-string (url &optional (out *standard-output*))
  "Grab the body as a string, and write it to out."
  (write-string (drakma:http-request url) out))

(defun wget-drakma-stream (url &optional (out *standard-output*))
  "Grab the body as a stream, and write it to out."
  (loop with body = (drakma:http-request url :want-stream t)
        for line = (read-line body nil nil)
        while line do (write-line line out)
        finally (close body)))
 
;; Use
(wget-drakma-stream "https://sourceforge.net")
 

Delphi

Library: OpenSSL
 
program ShowHTTPS;
 
{$APPTYPE CONSOLE}
 
uses IdHttp, IdSSLOpenSSL;
 
var
  s: string;
  lHTTP: TIdHTTP;
begin
  lHTTP := TIdHTTP.Create(nil);
  try
    lHTTP.IOHandler := TIdSSLIOHandlerSocketOpenSSL.Create(lHTTP);
    lHTTP.HandleRedirects := True;
    s := lHTTP.Get('https://sourceforge.net/');
    Writeln(s);
  finally
    lHTTP.Free;
  end;
end.
 

Erlang

Synchronous

 
-module(main).
-export([main/1]).
 
main([Url|[]]) ->
  inets:start(),
  ssl:start(),
  case http:request(get, {Url, []}, [{ssl, [{verify, 0}]}], []) of
    {ok, {_V, _H, Body}} -> io:fwrite("~p~n", [Body]);
    {error, Res} -> io:fwrite("~p~n", [Res])
  end.
 

Asynchronous

 
-module(main).
-export([main/1]).
 
main([Url|[]]) ->
  inets:start(),
  ssl:start(),
  http:request(get, {Url, []}, [{ssl, [{verify, 0}]}], [{sync, false}]),
  receive
    {http, {_ReqId, Res}} -> io:fwrite("~p~n", [Res]);
    _Any -> io:fwrite("Error: ~p~n", [_Any])
  after 10000 -> io:fwrite("Timed out.~n", [])
  end.
 

Using it

 
escript ./req.erl https://sourceforge.net/
 

F#

The underlying .NET classes handle secure web connections the same way they manage insecure connections.

 
#light
let wget (url : string) =
    let c = new System.Net.WebClient()
    c.DownloadString(url)
 

Frink

 
print[read["https://sourceforge.net/"]]
 

Go

 
package main
 
import (
    "io"
    "log"
    "net/http"
    "os"
)

func main() {
    r, err := http.Get("https://sourceforge.net/")
    if err != nil {
        log.Fatalln(err)
    }
    defer r.Body.Close()
    io.Copy(os.Stdout, r.Body)
}
 

Groovy

 
new URL("https://sourceforge.net").eachLine { println it }
 

Haskell

Library: http-conduit
Works with: GHC version 7.4.1

This is just the example from Network.HTTP.Conduit, with the http URL replaced with an https one, since http-conduit natively supports https without needing any additional work.

#!/usr/bin/runhaskell
 
import Network.HTTP.Conduit
import qualified Data.ByteString.Lazy as L
import Network (withSocketsDo)
 
main = withSocketsDo
    $ simpleHttp "https://sourceforge.net/" >>= L.putStr

Ioke

Translation of: Java
 
connection = URL new("https://sourceforge.net") openConnection
scanner = Scanner new(connection getInputStream)
 
while(scanner hasNext,
  scanner next println
)
 

J

Using gethttp from Web Scraping

 
#page=: gethttp'https://sourceforge.net'
0
#page=: '--no-check-certificate' gethttp'https://sourceforge.net'
900
 

(We cannot load the example page over https unless we disable certificate checking. The numbers are the number of characters retrieved.)

Java

Additional certificate information is available through the javax.net.ssl.HttpsURLConnection interface.

 
URL url = new URL("https://sourceforge.net");
HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
Scanner scanner = new Scanner(connection.getInputStream());
 
while (scanner.hasNext()) {
    System.out.println(scanner.next());
}
 
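As a sketch of the certificate information mentioned above, the connection can report the negotiated cipher suite and the server certificate chain once the handshake has completed. The CertInfo class name and output format are illustrative; the calls are standard HttpsURLConnection accessors.

import java.net.URL;
import java.security.cert.Certificate;
import javax.net.ssl.HttpsURLConnection;

public class CertInfo {
    public static void main(String[] args) throws Exception {
        HttpsURLConnection connection =
                (HttpsURLConnection) new URL("https://sourceforge.net").openConnection();
        connection.connect();  // certificate data is only available after the handshake
        System.out.println("Cipher suite: " + connection.getCipherSuite());
        for (Certificate cert : connection.getServerCertificates()) {
            System.out.println("Certificate type: " + cert.getType());
        }
        connection.disconnect();
    }
}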

JavaScript

 
(function (url, callback) { // on some browsers you can check certificate information
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState == xhr.DONE) { callback(xhr); }
    };
    xhr.send();
})('https://sourceforge.net', function (xhr) { console.log(xhr.response); });
 

Lasso

local(x = curl('https://sourceforge.net'))
local(y = #x->result)
#y->asString

If a site with an invalid SSL certificate is encountered, the curl type throws the following error:

Output:
FAILURE: 60 Peer certificate cannot be authenticated with given CA certificates

LSL

Virtually identical to the HTTP Task.

To test it yourself, rez a box on the ground and add the following as a New Script.

string sURL = "https://SourceForge.Net/";
key kHttpRequestId;
default {
    state_entry() {
        kHttpRequestId = llHTTPRequest(sURL, [], "");
    }
    http_response(key kRequestId, integer iStatus, list lMetaData, string sBody) {
        if(kRequestId==kHttpRequestId) {
            llOwnerSay("Status="+(string)iStatus);
            integer x = 0;
            for(x=0 ; x<llGetListLength(lMetaData) ; x++) {
                llOwnerSay("llList2String(lMetaData, "+(string)x+")="+llList2String(lMetaData, x));
            }
            list lBody = llParseString2List(sBody, ["\n"], []);
            for(x=0 ; x<llGetListLength(lBody) ; x++) {
                llOwnerSay("llList2String(lBody, "+(string)x+")="+llList2String(lBody, x));
            }
        }
    }
}

Output:

Status=200
llList2String(lMetaData, 0)=0
llList2String(lMetaData, 1)=2048
llList2String(lBody, 0)=<!doctype html>
llList2String(lBody, 1)=<!-- Server: sfs-consume-7 -->
llList2String(lBody, 2)=<!--[if lt IE 7 ]> <html lang="en" class="no-js ie6" > <![endif]-->
llList2String(lBody, 3)=<!--[if IE 7 ]>    <html lang="en" class="no-js ie7" > <![endif]-->
llList2String(lBody, 4)=<!--[if IE 8 ]>    <html lang="en" class="no-js ie8" > <![endif]-->
llList2String(lBody, 5)=<!--[if IE 9 ]>    <html lang="en" class="no-js ie9" > <![endif]-->
llList2String(lBody, 6)=<!--[if (gt IE 9)|!(IE)]>--> <html lang="en" class="no-js"> <!--<![endif]-->
llList2String(lBody, 7)=    <head>
llList2String(lBody, 8)=        <meta charset="utf-8">
llList2String(lBody, 9)=        
llList2String(lBody, 10)=        <meta id="webtracker" name="webtracker" content='{"event_id": "ea71f064-ca28-11e1-98cc-0019b9f0e8fc"}'>
llList2String(lBody, 11)=        <meta name="description" content="Free, secure and fast downloads from the largest Open Source applications and software directory - SourceForge.net">
llList2String(lBody, 12)=        <meta name="keywords" content="Open Source, Open Source Software, Development, Community, Source Code, Secure,  Downloads, Free Software">
llList2String(lBody, 13)=<meta name="msvalidate.01" content="0279349BB9CF7ACA882F86F29C50D3EA" />
llList2String(lBody, 14)=        <meta name="viewport" content="width=device-width, initial-scale=1.0">
llList2String(lBody, 15)=        <title>SourceForge - Download, Develop and Publish Free Open Source Software</title>
llList2String(lBody, 16)=        <link rel="shortcut icon" href="http://a.fsdn.com/con/img/sftheme/favicon.ico">
...   ...   ...   ...   ...   ...   ...   ...   ...   ...   ...   ...   ...   ...

Maple

 
content := URL:-Get( "https://www.google.ca/" );
 

Mathematica

A straightforward "Import" task. More complicated secure web access can be done using J/Link, which is essentially a link to the Java API.

 
content=Import["https://sourceforge.net", "HTML"]
 

MATLAB / Octave

s=urlread('https://sourceforge.net/')

Nemerle

This example is essentially identical to the HTTP task because the WebClient object can be used with http:, https:, ftp: and file: URIs.

using System;
using System.Console;
using System.Net;
using System.IO;
 
module HTTP
{
    Main() : void
    {
        def wc = WebClient();
        def myStream = wc.OpenRead("https://sourceforge.com");
        def sr = StreamReader(myStream);

        WriteLine(sr.ReadToEnd());
        myStream.Close()
    }
}

Nimrod

import httpclient
 
echo getContent "https://sourceforge.net"

Objeck

 
use HTTP;
 
class HttpsTest {
    function : Main(args : String[]) ~ Nil {
        client := HttpsClient->New();
        lines := client->Get("https://sourceforge.net");
        each(i : lines) {
            lines->Get(i)->As(String)->PrintLine();
        };
    }
}
 

Perl

Library: LWP
 
use strict;
use LWP::UserAgent;
 
my $url = 'https://www.rosettacode.org';
my $response = LWP::UserAgent->new->get( $url );
 
$response->is_success or die "Failed to GET '$url': ", $response->status_line;
 
print $response->as_string;
 

PHP

 
echo file_get_contents('https://sourceforge.net');
 

PicoLisp

PicoLisp has no built-in functionality for communicating with an HTTPS server (only for the other direction), but it is easy to use an external tool:

 
(in '(curl "https://sourceforge.net") # Open a pipe to 'curl'
(out NIL (echo)) ) # Echo to standard output
 

Pike

 
int main() {
    write("%s\n", Protocols.HTTP.get_url_data("https://sourceforge.net"));
}
 

PowerShell

 
$wc = New-Object Net.WebClient
$wc.DownloadString('https://sourceforge.net')
 

If the certificate cannot be validated (untrusted, self-signed, expired), an exception is thrown with the message “The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.”, so certificate validation is performed automatically by the method.

Python

Python's urllib.request library (urllib2 in Python 2.x) supports SSL if the interpreter's underlying httplib libraries were compiled with SSL support. This is enabled by default in standard Python installations on most platforms.

Python 3.x:

 
from urllib.request import urlopen
print(urlopen('https://sourceforge.net/').read())
 

(Python 2.x)

 
from urllib2 import urlopen
print urlopen('https://sourceforge.net/').read()
 

R

Library: RCurl
Library: XML

The basic idea is to use getURL (as with HTTP_Request), but with some extra parameters.

library(RCurl)
webpage <- getURL("https://sourceforge.net/", .opts=list(followlocation=TRUE, ssl.verifyhost=FALSE, ssl.verifypeer=FALSE))

In this case, the webpage output contains unprocessed characters, e.g. \" instead of " and \\ instead of \, so we need to process the markup.

 
wp <- readLines(tc <- textConnection(webpage))
close(tc)
 

Finally, we parse the HTML and find the interesting bit.

 
pagetree <- htmlTreeParse(wp)
pagetree$children$html
 

Racket

 
#lang racket
(require net/url)
(copy-port (get-pure-port (string->url "https://www.google.com")
                          #:redirections 100)
           (current-output-port))
 

REALbasic

REALbasic provides an HTTPSecureSocket class for handling HTTPS connections. The Get method of HTTPSecureSocket is overloaded and can download data to a file or return data as a string; in both cases an optional timeout argument can be passed.

 
Dim sock As New HTTPSecureSocket
Print(sock.Get("https://sourceforge.net", 10)) //set the timeout period to 10 seconds.
 

RLaB

See HTTP#RLaB

Ruby

This solution doesn't use the open-uri convenience package that the HTTP Request#Ruby solution uses: the Net::HTTP object must be told to use SSL before the session is started.

 
require 'net/https'
require 'uri'
require 'pp'
 
uri = URI.parse('https://sourceforge.net')
http = Net::HTTP.new(uri.host,uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
 
http.start do
  content = http.get("/")
  p [content.code, content.message]
  pp content.to_hash
  puts content.body
end
 

outputs

["302", "Found"]
{"location"=>["http://sourceforge.net/"],
 "content-type"=>["text/html; charset=UTF-8"],
 "connection"=>["close"],
 "server"=>["nginx/0.7.60"],
 "date"=>["Sun, 30 Aug 2009 20:20:07 GMT"],
 "content-length"=>["229"],
 "set-cookie"=>
  ["sf.consume=89f65c6fadd222338b2f3de6f8e8a17b2c8f67c2gAJ9cQEoVQhfZXhwaXJlc3ECY2RhdGV0aW1lCmRhdGV0aW1lCnEDVQoH9gETAw4HAAAAhVJxBFUDX2lkcQVVIDEyOWI2MmVkOWMwMWYxYWZiYzE5Y2JhYzcwZDMxYTE4cQZVDl9hY2Nlc3NlZF90aW1lcQdHQdKmt73UN21VDl9jcmVhdGlvbl90aW1lcQhHQdKmt73UN2V1Lg==; expires=Tue, 19-Jan-2038 03:14:07 GMT; Path=/"]}
<html>
 <head>
  <title>302 Found</title>
 </head>
 <body>
  <h1>302 Found</h1>
  The resource was found at <a href="http://sourceforge.net/">http://sourceforge.net/</a>;
you should be redirected automatically.


 </body>
</html>

Scala

Library: Scala
import scala.io.Source
 
object HttpsTest extends App {
  System.setProperty("http.agent", "*")

  Source.fromURL("https://sourceforge.net").getLines.foreach(println)
}

Seed7

$ include "seed7_05.s7i";
include "gethttps.s7i";
include "utf8.s7i";
 
const proc: main is func
  begin
    writeln(STD_UTF8_OUT, getHttps("sourceforge.net"));
  end func;

Tcl

Though Tcl's built-in http package does not understand SSL, it does support the registration of external handlers to accommodate additional protocols. This allows the use of the Tls package to supply the missing functionality with only a single line to complete the registration.

 
package require http
package require tls
 
# Tell the http package what to do with “https:” URLs.
#
# First argument is the protocol name, second the default port, and
# third the connection builder command
http::register "https" 443 ::tls::socket
 
# Make a secure connection, which is almost identical to normal
# connections except for the different protocol in the URL.
set token [http::geturl "https://sourceforge.net/"]
 
# Now as for conventional use of the “http” package
puts [http::data $token]
http::cleanup $token
 

TUSCRIPT

 
$$ MODE TUSCRIPT
SET DATEN = REQUEST ("https://sourceforge.net")
*{daten}
 

UNIX Shell

 
curl -k -s -L https://sourceforge.net/
 

VBScript

Based on code at "How to retrieve HTML web pages with VBScript via the Microsoft.XmlHttp object".

 
Option Explicit
 
Const sURL="https://sourceforge.net/"
 
Dim oHTTP
Set oHTTP = CreateObject("Microsoft.XmlHTTP")
 
On Error Resume Next
oHTTP.Open "GET", sURL, False
oHTTP.Send ""
If Err.Number = 0 Then
    WScript.Echo oHTTP.responseText
Else
    WScript.Echo "error " & Err.Number & ": " & Err.Description
End If
 
Set oHTTP = Nothing
 

Visual Basic .NET

 
Imports System.Net
 
Dim client As WebClient = New WebClient()
Dim content As String = client.DownloadString("https://sourceforge.net")
Console.WriteLine(content)
 

zkl

Using the cURL library to do the heavy lifting:

zkl: var ZC=Import("zklCurl")
zkl: var data=ZC().get("https://sourceforge.net")
L(Data(36,265),826,0)

get returns the text of the response along with two counts: the number of header bytes in front of the HTML and the number of bytes after the end of the page. So, if you wanted to look at the header:

zkl: data[0][0,data[1]).text
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 23 Mar 2014 07:36:51 GMT
Content-Type: text/html; charset=utf-8
Connection: close
...

or some of the HTML:

zkl: data[0][data[1],200).text
<!doctype html>
<!-- Server: sfs-consume-8 -->

<!--[if lt IE 7 ]> <html lang="en" class="no-js ie6"> <![endif]-->
<!--[if IE 7 ]>    <html lang="en" class="no-js ie7"> <![endif]-->
<!--[if IE 8 ]>   