Merge pull request #2 from rembo10/master

sync upstream
This commit is contained in:
maxkoryukov
2016-02-01 15:37:45 +05:00
81 changed files with 3488 additions and 2031 deletions

View File

@@ -16,6 +16,5 @@ install:
- pip install pep8
script:
- pep8 headphones
- pylint --rcfile=pylintrc headphones
- pyflakes headphones
- nosetests headphones

API.md
View File

@@ -66,7 +66,7 @@ Unmark album as wanted / i.e. mark as skipped
### forceSearch
force search for wanted albums - not launched in a separate thread so it may take a bit to complete
### forceProcess
### forceProcess[&dir=/path/to/folder]
Force post process albums in download directory - also not launched in a separate thread
### forceActiveArtistsUpdate
force Active Artist Update - also not launched in a separate thread
@@ -108,4 +108,4 @@ See above.
Gives you a list of results from searcher.searchforalbum(). Basically runs a normal search, but rather than sorting the results and downloading the best one, it dumps the data, which you can then pass on to download_specific_release(). Returns a list of dictionaries with params: title, size, url, provider & kind - all of these values must be passed back to download_specific_release
### download_specific_release&id=albumid&title=$title&size=$size&url=$url&provider=$provider&kind=$kind
Allows you to manually pass a choose_specific_download release back to searcher.send_to_downloader()
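
Both endpoints take the query form used throughout this API. Below is a minimal sketch of the whole flow in Python 2 (matching the codebase), assuming the usual `/api?apikey=...&cmd=...` endpoint on a default install at localhost:8181; the API key is a placeholder, and `choose_specific_download` is assumed to return its result list as JSON like the other commands:

```python
# Minimal sketch (Python 2). Host, port and API key are placeholders.
import json
import urllib
import urllib2

BASE = "http://localhost:8181/api"   # assumed default Headphones address
APIKEY = "your_api_key"              # placeholder

def api(cmd, **params):
    query = urllib.urlencode(dict(apikey=APIKEY, cmd=cmd, **params))
    return urllib2.urlopen(BASE + "?" + query).read()

# Post-process a single download folder (the new dir parameter).
api("forceProcess", dir="/path/to/folder")

# Run a search, inspect the candidates, then pass every field of the
# chosen result back to download_specific_release.
results = json.loads(api("choose_specific_download", id="albumid"))
if results:
    best = results[0]
    api("download_specific_release", id="albumid", title=best["title"],
        size=best["size"], url=best["url"], provider=best["provider"],
        kind=best["kind"])
```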

View File

@@ -1,5 +1,25 @@
# Changelog
## v0.5.10
Released 29 January 2016
Highlights:
* Added: API option to post-process single folders
* Added: Ability to specify extension when re-encoding
* Added: Option to stop renaming folders
* Fixed: Utorrent torrents not being removed (#2385)
* Fixed: Torznab to transmission
* Fixed: Magnet folder names in history
* Fixed: Multiple torcache fixes
* Fixed: Updated requests & urllib3 to latest versions to fix errors with pyOpenSSL
* Improved: Use a temporary folder during post-processing
* Improved: Added verify_ssl_cert option
* Improved: Track matching progress
* Improved: pylint, pep8 & pyflakes fixes
* Improved: Stop JS links from scrolling to the top of the page
The full list of commits can be found [here](https://github.com/rembo10/headphones/compare/v0.5.9...v0.5.10).
## v0.5.9
Released 05 September 2015
@@ -12,7 +32,7 @@ Highlights:
* Fixed: Pushover notifications
* Improved: Rutracker logging, switched to requests lib
The full list of commits can be found [here](https://github.com/rembo10/headphones/compare/v0.5.6...v0.5.7).
The full list of commits can be found [here](https://github.com/rembo10/headphones/compare/v0.5.8...v0.5.9).
## v0.5.8
Released 13 July 2015
@@ -30,7 +50,7 @@ Highlights:
* Improved: Set localhost as default
* Improved: Better single artist scanning
The full list of commits can be found [here](https://github.com/rembo10/headphones/compare/v0.5.6...v0.5.7).
The full list of commits can be found [here](https://github.com/rembo10/headphones/compare/v0.5.7...v0.5.8).
## v0.5.7
Released 01 July 2015

View File

@@ -1,4 +1,7 @@
#![Headphones Logo](https://github.com/rembo10/headphones/raw/master/data/images/headphoneslogo.png) Headphones
##![Headphones Logo](https://github.com/rembo10/headphones/raw/master/data/images/headphoneslogo.png) Headphones
**Master Branch:** [![Build Status](https://travis-ci.org/rembo10/headphones.svg?branch=master)](https://travis-ci.org/rembo10/headphones)
**Develop Branch:** [![Build Status](https://travis-ci.org/rembo10/headphones.svg?branch=develop)](https://travis-ci.org/rembo10/headphones)
Headphones is an automated music downloader for NZB and Torrent, written in Python. It supports SABnzbd, NZBget, Transmission, µTorrent and Blackhole.
@@ -18,8 +21,8 @@ You are free to join the Headphones support community on IRC where you can ask q
1. Analyze your log, you just might find the solution yourself!
2. You read the wiki and searched existing issues, but they did not solve your problem.
3. Post the issue with a clear title, description and the HP log and use [proper markdown syntax](https://help.github.com/articles/github-flavored-markdown) to structure your text (code/log in code blocks).
4. Close your issue when it's solved! If you found the solution yourself please comment so that others benefit from it.
4. Close your issue when it's solved! If you found the solution yourself, please comment so that others benefit from it.
**Feature requests** can be reported on the GitHub issue tracker too:

contrib/sni_test.py (new file)
View File

@@ -0,0 +1,145 @@
#!/usr/bin/env python

import os
import sys

# Ensure that we use the Headphones-provided libraries.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../lib"))

import urlparse


def can_import(module):
    """
    Return True if a given module can be imported, False otherwise.
    """

    try:
        __import__(module)
    except ImportError:
        return False

    # Module can be imported
    return True


def check_installation():
    """
    Check if some core modules are available. Info is based on this topic:
    https://github.com/rembo10/headphones/issues/2210.
    """

    if can_import("requests"):
        import requests
        requests_version = requests.__version__
    else:
        requests_version = "no"

    if can_import("OpenSSL"):
        import OpenSSL
        openssl_version = OpenSSL.__version__
    else:
        openssl_version = "no"

    if can_import("cryptography"):
        import cryptography
        cryptography_version = cryptography.__version__
    else:
        cryptography_version = "no"

    if can_import("pyasn1"):
        import pyasn1
        pyasn1_version = pyasn1.__version__
    else:
        pyasn1_version = "no"

    if can_import("ndg.httpsclient"):
        from ndg import httpsclient
        ndg_version = httpsclient.__date__
    else:
        ndg_version = "no"

    # Print some system information.
    sys.stdout.write(
        "* Checking Python version: %s.%s.%s\n" % sys.version_info[:3])
    sys.stdout.write("* Operating system: %s\n" % sys.platform)
    sys.stdout.write(
        "* Checking if requests can be imported: %s\n" % requests_version)
    sys.stdout.write(
        "* Checking if pyOpenSSL is installed: %s\n" % openssl_version)
    sys.stdout.write(
        "* Checking if cryptography is installed: %s\n" % cryptography_version)
    sys.stdout.write(
        "* Checking if pyasn1 is installed: %s\n" % pyasn1_version)
    sys.stdout.write(
        "* Checking if ndg.httpsclient is installed: %s\n" % ndg_version)


def main():
    """
    Test if the current Headphones installation can connect to SNI-enabled
    servers.
    """

    # Read the URL to test.
    if len(sys.argv) == 1:
        url = "https://sni.velox.ch/"
    else:
        url = sys.argv[1]

    # Check if it is an HTTPS website.
    parts = urlparse.urlparse(url)

    if parts.scheme.lower() != "https":
        sys.stderr.write(
            "Error: provided URL does not start with https://\n")
        return 1

    # Gather information
    check_installation()

    # Do the request.
    if not can_import("requests"):
        sys.stderr.write("Error: cannot continue without requests module!\n")
        return 1

    sys.stdout.write("* Performing request: %s\n" % url)

    import requests
    requests.packages.urllib3.disable_warnings()

    try:
        try:
            response = requests.get(url)
        except requests.exceptions.SSLError:
            sys.stdout.write(
                "- Server certificate seems invalid. I will disable "
                "certificate check and try again. You'll see the real "
                "exception if it fails again.\n")
            sys.stdout.write(
                "* Retrying request with certificate verification off.\n")
            response = requests.get(url, verify=False)
    except Exception as e:
        sys.stdout.write(
            "- An error occurred while performing the request. The "
            "exception was: %s\n" % e)
        sys.stdout.write(
            "- Consult the Troubleshooting wiki (https://github.com/"
            "rembo10/headphones/wiki/Troubleshooting) before you post an "
            "issue!\n")
        return 0

    # Verify the response.
    if response.status_code == 200:
        sys.stdout.write("+ Got a valid response. All seems OK!\n")
    else:
        sys.stdout.write(
            "- Server returned status code %s. Expected status code 200.\n"
            % response.status_code)
        sys.stdout.write(
            "- However, I was able to communicate with the server!\n")


# E.g. `python sni_test.py https://example.org'.
if __name__ == "__main__":
    sys.exit(main())

View File

@@ -14,15 +14,15 @@
<a id="menu_link_delete" href="deleteAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}"><i class="fa fa-trash-o"></i> Delete Album</a>
%if album['Status'] == 'Skipped' or album['Status'] == 'Ignored':
<a id="menu_link_wanted" href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&new=False', $(this),true)" data-success="'${album['AlbumTitle']}' added to queue"><i class="fa fa-heart"></i> Mark Album as Wanted</a>
<a id="menu_link_wanted" href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&new=False', $(this),true)" data-success="'${album['AlbumTitle']}' added to queue"><i class="fa fa-heart"></i> Mark Album as Wanted</a>
%elif album['Status'] == 'Wanted':
<a id="menu_link_check" href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&new=True', $(this));" data-success="Forced checking successful"><i class="fa fa-search"></i> Force Check</a>
<a id="menu_link_skipped" href="#" onclick="doAjaxCall('unqueueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}', $(this),true);" data-success="'${album['AlbumTitle']}' marked as Skipped"><i class="fa fa-step-forward"></i> Mark Album as Skipped</a>
<a id="menu_link_check" href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&new=True', $(this));" data-success="Forced checking successful"><i class="fa fa-search"></i> Force Check</a>
<a id="menu_link_skipped" href="javascript:void(0)" onclick="doAjaxCall('unqueueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}', $(this),true);" data-success="'${album['AlbumTitle']}' marked as Skipped"><i class="fa fa-step-forward"></i> Mark Album as Skipped</a>
%else:
<a id="menu_link_retry" href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&new=False', $(this),true);" data-success="Retrying the same version of '${album['AlbumTitle']}'"><i class="fa fa-refresh"></i> Retry Download</a>
<a id="menu_link_new" href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&new=True', $(this),true);" data-success="Looking for a new version of '${album['AlbumTitle']}'"><i class="fa fa-download"></i> Try New Version</a>
<a id="menu_link_retry" href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&new=False', $(this),true);" data-success="Retrying the same version of '${album['AlbumTitle']}'"><i class="fa fa-refresh"></i> Retry Download</a>
<a id="menu_link_new" href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&new=True', $(this),true);" data-success="Looking for a new version of '${album['AlbumTitle']}'"><i class="fa fa-download"></i> Try New Version</a>
%endif
<a class="menu_link_edit" id="album_chooser" href="#"><i class="fa fa-pencil"></i> Choose Alternate Release</a>
<a class="menu_link_edit" id="album_chooser" href="javascript:void(0)"><i class="fa fa-pencil"></i> Choose Alternate Release</a>
<div id="dialog" title="Choose an Alternate Release" style="display:none" class="configtable">
<div class="links">
<%
@@ -30,7 +30,7 @@
%>
%if not alternate_albums:
<p>No alternate releases found. Try refreshing the artist (if the artist is being refreshed, please wait until it's finished)</p>
<h2><a id="refresh_artist" onclick="doAjaxCall('refreshArtist?ArtistID=${album['ArtistID']}', $(this)), true" href="#" data-success="'${album['ArtistName']}' is being refreshed">Refresh Artist</a></h2>
<h2><a id="refresh_artist" onclick="doAjaxCall('refreshArtist?ArtistID=${album['ArtistID']}', $(this)), true" href="javascript:void(0)" data-success="'${album['ArtistName']}' is being refreshed">Refresh Artist</a></h2>
%else:
%for alternate_album in alternate_albums:
<%
@@ -43,12 +43,12 @@
alternate_album_name = alternate_album['AlbumTitle'] + " (" + alternate_album['ReleaseCountry'] + ", " + str(alternate_album['ReleaseDate']) + ", " + alternate_album['ReleaseFormat'] + ") [" + str(have_track_count) + "/" + str(track_count) + " tracks]"
%>
<a href="#" onclick="doAjaxCall('switchAlbum?AlbumID=${album['AlbumID']}&ReleaseID=${alternate_album['ReleaseID']}', $(this), 'table');" data-success="Switched release to: ${alternate_album_name}">${alternate_album_name}</a><a href="${mb_link}" target="_blank">MB</a><br>
<a href="javascript:void(0)" onclick="doAjaxCall('switchAlbum?AlbumID=${album['AlbumID']}&ReleaseID=${alternate_album['ReleaseID']}', $(this), 'table');" data-success="Switched release to: ${alternate_album_name}">${alternate_album_name}</a><a href="${mb_link}" target="_blank">MB</a><br>
%endfor
%endif
</div>
</div>
<a class="menu_link_edit" id="edit_search_term" href="#"><i class="fa fa-pencil"></i> Edit Search Term</a>
<a class="menu_link_edit" id="edit_search_term" href="javascript:void(0)"><i class="fa fa-pencil"></i> Edit Search Term</a>
<div id="dialog2" title="Enter your own search term for this album" style="display:none" class="configtable">
<form action="editSearchTerm" method="GET" id="editSearchTerm">
<input type="hidden" name="AlbumID" value="${album['AlbumID']}">
@@ -64,7 +64,7 @@
<input type="button" value="Save changes" onclick="doAjaxCall('editSearchTerm',$(this),'tabs',true);return false;" data-success="Search term updated"/>
</form>
</div>
<a class="menu_link_edit" id="choose_specific_download" href="#" onclick="getAvailableDownloads()"><i class="fa fa-search"></i> Choose Specific Download</a>
<a class="menu_link_edit" id="choose_specific_download" href="javascript:void(0)" onclick="getAvailableDownloads()"><i class="fa fa-search"></i> Choose Specific Download</a>
<div id="choose_specific_download_dialog" title="Choose a specific download for this album" style="display:none" class="configtable">
<table class="display" id="downloads_table">
<thead>
@@ -238,7 +238,7 @@
feedback.fadeOut();
search_results = data
for( var i = 0, len = data.length; i < len; i++ ) {
$('#downloads_table_body').append('<tr><td id="title"><a href="#" onclick="downloadSpecificRelease('+i+')">'+data[i].title+'</a></td><td id="size"><span title='+data[i].size+'></span>'+(data[i].size / (1024*1024)).toFixed(2)+' MB</td><td id="provider">'+data[i].provider+'</td><td id="kind">'+data[i].kind+'</td><td id="matches">'+data[i].matches+'</td></tr>');
$('#downloads_table_body').append('<tr><td id="title"><a href="javascript:void(0)" onclick="downloadSpecificRelease('+i+')">'+data[i].title+'</a></td><td id="size"><span title='+data[i].size+'></span>'+(data[i].size / (1024*1024)).toFixed(2)+' MB</td><td id="provider">'+data[i].provider+'</td><td id="kind">'+data[i].kind+'</td><td id="matches">'+data[i].matches+'</td></tr>');
}
$('#downloads_table').dataTable({
"aoColumns": [

View File

@@ -8,19 +8,19 @@
<%def name="headerIncludes()">
<div id="subhead_container">
<div id="subhead_menu">
<a id="menu_link_refresh" onclick="doSimpleAjaxCall('refreshArtist?ArtistID=${artist['ArtistID']}')" href="#"><i class="fa fa-refresh"></i> Refresh Artist</a>
<a id="menu_link_refresh" onclick="doSimpleAjaxCall('refreshArtist?ArtistID=${artist['ArtistID']}')" href="javascript:void(0)"><i class="fa fa-refresh"></i> Refresh Artist</a>
<a id="menu_link_delete" href="deleteArtist?ArtistID=${artist['ArtistID']}"><i class="fa fa-trash-o"></i> Delete Artist</a>
<a id="menu_link_scan" onclick="doAjaxCall('scanArtist?ArtistID=${artist['ArtistID']}', $(this)),'table'" href="#" data-success="'${artist['ArtistName']}' was scanned"><i class="fa fa-refresh"></i> Scan Artist</a>
<a id="menu_link_scan" onclick="doAjaxCall('scanArtist?ArtistID=${artist['ArtistID']}', $(this)),'table'" href="javascript:void(0)" data-success="'${artist['ArtistName']}' was scanned"><i class="fa fa-refresh"></i> Scan Artist</a>
%if artist['Status'] == 'Paused':
<a id="menu_link_resume" href="#" onclick="doAjaxCall('resumeArtist?ArtistID=${artist['ArtistID']}',$(this),true)" data-success="${artist['ArtistName']} resumed"><i class="fa fa-play"></i> Resume Artist</a>
<a id="menu_link_resume" href="javascript:void(0)" onclick="doAjaxCall('resumeArtist?ArtistID=${artist['ArtistID']}',$(this),true)" data-success="${artist['ArtistName']} resumed"><i class="fa fa-play"></i> Resume Artist</a>
%else:
<a id="menu_link_pauze" href="#" onclick="doAjaxCall('pauseArtist?ArtistID=${artist['ArtistID']}',$(this),true)" data-success="${artist['ArtistName']} paused"><i class="fa fa-pause"></i> Pause Artist</a>
<a id="menu_link_pauze" href="javascript:void(0)" onclick="doAjaxCall('pauseArtist?ArtistID=${artist['ArtistID']}',$(this),true)" data-success="${artist['ArtistName']} paused"><i class="fa fa-pause"></i> Pause Artist</a>
%endif
%if artist['IncludeExtras']:
<a id="menu_link_removeextra" href="#" onclick="doAjaxCall('removeExtras?ArtistID=${artist['ArtistID']}&ArtistName=${artist['ArtistName']}',$(this),'submenu&table')" data-success="Extras removed for ${artist['ArtistName']}"><i class="fa fa-minus"></i> Remove Extras</a>
<a class="menu_link_edit" id="menu_link_modifyextra" href="#"><i class="fa fa-pencil"></i> Modify Extras</a>
<a id="menu_link_removeextra" href="javascript:void(0)" onclick="doAjaxCall('removeExtras?ArtistID=${artist['ArtistID']}&ArtistName=${artist['ArtistName']}',$(this),'submenu&table')" data-success="Extras removed for ${artist['ArtistName']}"><i class="fa fa-minus"></i> Remove Extras</a>
<a class="menu_link_edit" id="menu_link_modifyextra" href="javascript:void(0)"><i class="fa fa-pencil"></i> Modify Extras</a>
%else:
<a id="menu_link_getextra" href="#"><i class="fa fa-plus"></i> Get Extras</a>
<a id="menu_link_getextra" href="javascript:void(0)"><i class="fa fa-plus"></i> Get Extras</a>
%endif
<div id="dialog" title="Choose Which Extras to Fetch" style="display:none" class="configtable">
<form action="getExtras" method="get" class="form">
@@ -129,16 +129,16 @@
<td id="score">${album['CriticScore']}/${album['UserScore']}</td>
<td id="status">${album['Status']}
%if album['Status'] == 'Skipped' or album['Status'] == 'Ignored':
[<a href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}',$(this),'table')" data-success="'${album['AlbumTitle']}' added to Wanted list">want</a>]
[<a href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}',$(this),'table')" data-success="'${album['AlbumTitle']}' added to Wanted list">want</a>]
%elif (album['Status'] == 'Wanted' or album['Status'] == 'Wanted Lossless'):
[<a href="#" onclick="doAjaxCall('unqueueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}',$(this),'table')" data-success="'${album['AlbumTitle']}' skipped">skip</a>] [<a href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}', $(this),'table')" data-success="Trying to download'${album['AlbumTitle']}'" title="Search if available for download">search</a>]
[<a href="javascript:void(0)" onclick="doAjaxCall('unqueueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}',$(this),'table')" data-success="'${album['AlbumTitle']}' skipped">skip</a>] [<a href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}', $(this),'table')" data-success="Trying to download'${album['AlbumTitle']}'" title="Search if available for download">search</a>]
%else:
[<a href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}', $(this),'table')" data-success="Retrying the same version of '${album['AlbumTitle']}'" title="Retry the same download again">retry</a>][<a href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&new=True', $(this),'table')" title="Try a new download, skipping all previously tried nzbs" data-success="Downloading new version for '${album['AlbumTitle']}'" data-success="Looking for a new version of '${album['AlbumTitle']}'">new</a>]
[<a href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}', $(this),'table')" data-success="Retrying the same version of '${album['AlbumTitle']}'" title="Retry the same download again">retry</a>][<a href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&new=True', $(this),'table')" title="Try a new download, skipping all previously tried nzbs" data-success="Downloading new version for '${album['AlbumTitle']}'" data-success="Looking for a new version of '${album['AlbumTitle']}'">new</a>]
%endif
%if albumformat in lossy_formats and album['Status'] == 'Skipped':
[<a id="wantlossless" href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&lossless=True', $(this),'table')" data-success="Lossless version of '${album['AlbumTitle']}' added to queue">want lossless</a>]
[<a id="wantlossless" href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&lossless=True', $(this),'table')" data-success="Lossless version of '${album['AlbumTitle']}' added to queue">want lossless</a>]
%elif albumformat in lossy_formats and (album['Status'] == 'Snatched' or album['Status'] == 'Downloaded'):
[<a id="wantlossless" href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&lossless=True', $(this),'table')" data-success="Retrying the same lossless version of '${album['AlbumTitle']}'">retry lossless</a>]
[<a id="wantlossless" href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${album['AlbumID']}&ArtistID=${album['ArtistID']}&lossless=True', $(this),'table')" data-success="Retrying the same lossless version of '${album['AlbumTitle']}'">retry lossless</a>]
%endif
</td>
<td id="have"><span title="${percent}"><span><div class="progress-container"><div style="width:${percent}%"><div class="havetracks">${havetracks}/${totaltracks}</div></div></div></td>

View File

@@ -34,11 +34,11 @@
% if headphones.CONFIG.CHECK_GITHUB and not headphones.CURRENT_VERSION:
<div id="updatebar">
You're running an unknown version of Headphones. <a href="update">Update</a> or
<a href="#" onclick="$('#updatebar').slideUp('slow');">Close</a>
<a href="javascript:void(0)" onclick="$('#updatebar').slideUp('slow');">Close</a>
</div>
% elif headphones.CONFIG.CHECK_GITHUB and headphones.CURRENT_VERSION != headphones.LATEST_VERSION and headphones.COMMITS_BEHIND > 0 and headphones.INSTALL_TYPE != 'win':
<div id="updatebar">
A <a href="https://github.com/${headphones.CONFIG.GIT_USER}/headphones/compare/${headphones.CURRENT_VERSION}...${headphones.LATEST_VERSION}"> newer version</a> is available. You're ${headphones.COMMITS_BEHIND} commits behind. <a href="update">Update</a> or <a href="#" onclick="$('#updatebar').slideUp('slow');">Close</a>
A <a href="https://github.com/${headphones.CONFIG.GIT_USER}/headphones/compare/${headphones.CURRENT_VERSION}...${headphones.LATEST_VERSION}"> newer version</a> is available. You're ${headphones.COMMITS_BEHIND} commits behind. <a href="update">Update</a> or <a href="javascript:void(0)" onclick="$('#updatebar').slideUp('slow');">Close</a>
</div>
% endif
@@ -89,7 +89,7 @@
<small>
<a href="shutdown"><i class="fa fa-power-off"></i> Shutdown</a> |
<a href="restart"><i class="fa fa-power-off"></i> Restart</a> |
<a href="#" onclick="doAjaxCall('checkGithub',$(this))" data-success="Checking for update successful" data-error="Error checking for update"><i class="fa fa-refresh"></i> Check for new version</a>
<a href="javascript:void(0)" onclick="doAjaxCall('checkGithub',$(this))" data-success="Checking for update successful" data-error="Error checking for update"><i class="fa fa-refresh"></i> Check for new version</a>
</small>
</div>
<div id="version">

View File

@@ -336,9 +336,14 @@
<input type="radio" name="magnet_links" id="magnet_links_2" value="2" ${config['magnet_links_2']}>
Convert
</label>
<label class="inline">
<input type="radio" name="magnet_links" id="magnet_links_3" value="3" ${config['magnet_links_3']}>
Embed
</label>
<div style="clear: both"></div>
<small>Note: opening magnet URL's is not suitable for headless/console/terminal servers.</small>
<small>Note: Opening magnet URLs is not suitable for headless/console/terminal servers.<br />Embed only works for rTorrent.</small>
</div>
</fieldset>
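
The four magnet_links values map to the 0-3 scheme documented in config.py further down (0: Ignore, 1: Open, 2: Convert, 3: Embed). A rough, hypothetical dispatcher for those values; the torcache-style conversion URL is an assumption modeled on the service mentioned in the changelog, not Headphones' actual converter:

```python
# Hypothetical dispatcher for the magnet_links setting
# (0: Ignore, 1: Open, 2: Convert, 3: Embed - rTorrent only).
import re
import webbrowser

def handle_magnet(magnet_uri, magnet_links):
    if magnet_links == 1:
        webbrowser.open(magnet_uri)   # hand off to the OS magnet handler
    elif magnet_links == 2:
        # Convert: extract the info hash and build a .torrent cache URL
        # (torcache-style pattern assumed here).
        match = re.search(r"urn:btih:([A-Za-z0-9]+)", magnet_uri)
        if match:
            return "http://torcache.net/torrent/%s.torrent" % match.group(1).upper()
    elif magnet_links == 3:
        return magnet_uri             # Embed: rTorrent takes the magnet as-is
    return None                       # 0: Ignore
```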
<fieldset id="transmission_options">
@@ -1402,6 +1407,11 @@
<input type="text" name="xldprofile" value="${config['xldprofile']}" size="43">
</div>
</div>
<div class="row">
<label>Extension</label>
<input type="text" name="encoderoutputformat" value="${config['encoderoutputformat']}" size="43">
<small>If different from format selected above</small>
</div>
<div class="row">
<label>Path to Encoder</label>
<input type="text" name="encoder_path" value="${config['encoder_path']}" size="43">
@@ -1423,7 +1433,7 @@
else:
which_extras_selected = "None"
%>
<small>Currently Selected: ${which_extras_selected} <a href="#" id="modify_extras">(Change)</a></small></label>
<small>Currently Selected: ${which_extras_selected} <a href="javascript:void(0)" id="modify_extras">(Change)</a></small></label>
<div id="dialog" title="Choose Which Extras to Include" style="display:none" class="configtable">
%for extra in config['extras']:
<input type="checkbox" id="${extra}_temp" name="${extra}_temp" value="1" ${config['extras'][extra]} />${string.capwords(extra)}<br>

View File

@@ -334,7 +334,7 @@ form .row label {
font-size: 12px;
line-height: normal;
padding-top: 7px;
width: 175px;
width: 170px;
}
form .row label.inline {
margin-right: 5px;
@@ -368,7 +368,7 @@ form .row small {
display: block;
font-size: 9px;
line-height: 12px;
margin-left: 175px;
margin-left: 170px;
margin-top: 3px;
}
form .left label {

View File

@@ -190,7 +190,7 @@ form {
font-size: 12px;
line-height: normal;
padding-top: 7px;
width: 175px;
width: 170px;
&.inline {
margin-right: 5px;
@@ -216,7 +216,7 @@ form {
display: block;
font-size: 9px;
line-height: 12px;
margin-left: 175px;
margin-left: 170px;
margin-top: 3px;
}
}

View File

@@ -7,11 +7,11 @@
<%def name="headerIncludes()">
<div id="subhead_container">
<div id="subhead_menu">
<a id="menu_link_delete" href="#" onclick="doAjaxCall('clearhistory?type=all',$(this),'table')" data-success="All History cleared"><i class="fa fa-trash-o"></i> Clear All History</a>
<a id="menu_link_delete" href="#" onclick="doAjaxCall('clearhistory?type=Processed',$(this),'table')" data-success="All Processed cleared"><i class="fa fa-trash-o"></i> Clear Processed</a>
<a id="menu_link_delete" href="#" onclick="doAjaxCall('clearhistory?type=Unprocessed',$(this),'table')" data-success="All Unprocessed cleared"><i class="fa fa-trash-o"></i> Clear Unprocessed</a>
<a id="menu_link_delete" href="#" onclick="doAjaxCall('clearhistory?type=Frozen',$(this),'table')" data-success="All Frozen cleared"><i class="fa fa-trash-o"></i> Clear Frozen</a>
<a id="menu_link_delete" href="#" onclick="doAjaxCall('clearhistory?type=Snatched',$(this),'table')" data-success="All Snatched cleared"><i class="fa fa-trash-o"></i> Clear Snatched</a>
<a id="menu_link_delete" href="javascript:void(0)" onclick="doAjaxCall('clearhistory?type=all',$(this),'table')" data-success="All History cleared"><i class="fa fa-trash-o"></i> Clear All History</a>
<a id="menu_link_delete" href="javascript:void(0)" onclick="doAjaxCall('clearhistory?type=Processed',$(this),'table')" data-success="All Processed cleared"><i class="fa fa-trash-o"></i> Clear Processed</a>
<a id="menu_link_delete" href="javascript:void(0)" onclick="doAjaxCall('clearhistory?type=Unprocessed',$(this),'table')" data-success="All Unprocessed cleared"><i class="fa fa-trash-o"></i> Clear Unprocessed</a>
<a id="menu_link_delete" href="javascript:void(0)" onclick="doAjaxCall('clearhistory?type=Frozen',$(this),'table')" data-success="All Frozen cleared"><i class="fa fa-trash-o"></i> Clear Frozen</a>
<a id="menu_link_delete" href="javascript:void(0)" onclick="doAjaxCall('clearhistory?type=Snatched',$(this),'table')" data-success="All Snatched cleared"><i class="fa fa-trash-o"></i> Clear Snatched</a>
</div>
</div>
</%def>
@@ -50,6 +50,8 @@
fileid = 'nzb'
if item['URL'].find('torrent') != -1:
fileid = 'torrent'
if item['URL'].find('magnet:') != -1:
fileid = 'torrent'
if item['URL'].find('rutracker') != -1:
fileid = 'torrent'
if item['URL'].find('codeshy') != -1:
@@ -63,8 +65,8 @@
<td id="filename">${cgi.escape(item['Title'], quote=True)} [<a href="${item['URL']}">${fileid}</a>]<a href="albumPage?AlbumID=${item['AlbumID']}">[album page]</a></td>
<td id="size">${helpers.bytes_to_mb(item['Size'])}</td>
<td title="${folder}" id="status">${item['Status']}</td>
<td id="action">[<a href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${item['AlbumID']}&redirect=history', $(this),'table')" data-success="Retrying download of '${cgi.escape(item['Title'], quote=True)}'">retry</a>][<a href="#" onclick="doAjaxCall('queueAlbum?AlbumID=${item['AlbumID']}&new=True&redirect=history',$(this),'table')" data-success="Looking for a new version of '${cgi.escape(item['Title'], quote=True)}'">new</a>]</td>
<td id="delete"><a href="#" onclick="doAjaxCall('clearhistory?date_added=${item['DateAdded']}&title=${cgi.escape(item['Title'], quote=True)}',$(this),'table')" data-success="${cgi.escape(item['Title'], quote=True)} cleared from history"><img src="interfaces/default/images/trashcan.png" height="18" width="18" id="trashcan" title="Clear this item from the history"></a>
<td id="action">[<a href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${item['AlbumID']}&redirect=history', $(this),'table')" data-success="Retrying download of '${cgi.escape(item['Title'], quote=True)}'">retry</a>][<a href="javascript:void(0)" onclick="doAjaxCall('queueAlbum?AlbumID=${item['AlbumID']}&new=True&redirect=history',$(this),'table')" data-success="Looking for a new version of '${cgi.escape(item['Title'], quote=True)}'">new</a>]</td>
<td id="delete"><a href="javascript:void(0)" onclick="doAjaxCall('clearhistory?date_added=${item['DateAdded']}&title=${cgi.escape(item['Title'], quote=True)}',$(this),'table')" data-success="${cgi.escape(item['Title'], quote=True)} cleared from history"><img src="interfaces/default/images/trashcan.png" height="18" width="18" id="trashcan" title="Clear this item from the history"></a>
</tr>
%endfor
</tbody>
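
The template logic above boils down to a substring test on the download URL; a standalone sketch of the same decision (the marker list mirrors the checks visible in this hunk, including the new magnet: case):

```python
# Standalone version of the URL-kind test done in the template above.
def file_kind(url):
    torrent_markers = ("torrent", "magnet:", "rutracker", "codeshy")
    if any(marker in url for marker in torrent_markers):
        return "torrent"
    return "nzb"

print(file_kind("magnet:?xt=urn:btih:abcdef"))          # torrent
print(file_kind("https://indexer.example/getnzb/123"))  # nzb
```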

View File

@@ -6,7 +6,7 @@
<%def name="headerIncludes()">
<div id="subhead_container">
<div id="subhead_menu">
<a class="menu_link_edit" id="manage_albums" href="#"><i class="fa fa-pencil"></i> Manage Albums</a>
<a class="menu_link_edit" id="manage_albums" href="javascript:void(0)"><i class="fa fa-pencil"></i> Manage Albums</a>
<div id="dialog" title="Choose Album Filter" style="display:none" class="configtable">
<div class="links">
<a href="manageAlbums?Status=Downloaded"><i class="fa fa-check fa-fw"></i> Manage Downloaded Albums</a><br>
@@ -90,7 +90,7 @@
%>
<input type="text" value="${lastfmvalue}" placeholder="Last.fm username" onfocus="if
(this.value==this.defaultValue) this.value='';" name="username" id="username" size="18" />
<a href="#" onclick="doAjaxCall('importLastFM?username=',$(this),'tabs');return false;" data-success="Last.fm username has been reset"><i class="fa fa-reply"></i> Reset username</a>
<a href="javascript:void(0)" onclick="doAjaxCall('importLastFM?username=',$(this),'tabs');return false;" data-success="Last.fm username has been reset"><i class="fa fa-reply"></i> Reset username</a>
</div>
</fieldset>
<input type="button" value="Save changes" onclick="doAjaxCall('importLastFM',$(this),'tabs',true);return false;" data-success="Last.fm artists will be imported" data-error="Fill in a last.fm username"/>
@@ -121,10 +121,10 @@
<fieldset>
<legend>Force Search</legend>
<div class="links">
<a href="#" onclick="doAjaxCall('forceSearch',$(this))" data-success="Checking for wanted albums successful" data-error="Error checking wanted albums"><i class="fa fa-search fa-fw"></i> Force Check for Wanted Albums</a>
<a href="#" onclick="doAjaxCall('forceUpdate',$(this))" data-success="Update active artists successful" data-error="Error forcing update artists"><i class="fa fa-heart fa-fw"></i> Force Update Active Artists [Fast]</a>
<a href="#" onclick="doAjaxCall('checkGithub',$(this))" data-success="Checking for update successful" data-error="Error checking for update"><i class="fa fa-refresh fa-fw"></i> Check for Headphones Updates</a>
<a href="#" id="delete_empty_artists"><i class="fa fa-trash-o fa-fw"></i> Delete empty Artists</a>
<a href="javascript:void(0)" onclick="doAjaxCall('forceSearch',$(this))" data-success="Checking for wanted albums successful" data-error="Error checking wanted albums"><i class="fa fa-search fa-fw"></i> Force Check for Wanted Albums</a>
<a href="javascript:void(0)" onclick="doAjaxCall('forceUpdate',$(this))" data-success="Update active artists successful" data-error="Error forcing update artists"><i class="fa fa-heart fa-fw"></i> Force Update Active Artists [Fast]</a>
<a href="javascript:void(0)" onclick="doAjaxCall('checkGithub',$(this))" data-success="Checking for update successful" data-error="Error checking for update"><i class="fa fa-refresh fa-fw"></i> Check for Headphones Updates</a>
<a href="javascript:void(0)" id="delete_empty_artists"><i class="fa fa-trash-o fa-fw"></i> Delete empty Artists</a>
<div id="emptyartistdialog" title="Confirm Artist Deletion" style="display:none" class="configtable">
%if emptyArtists:
<h3>The following artists will be deleted:</h3>
@@ -138,7 +138,7 @@
%endif
</div>
<div id="post_process">
<a href="#" class="btnOpenDialog"><i class="fa fa-wrench fa-fw"></i> Force Post-Process Albums in Download Folder</a>
<a href="javascript:void(0)" class="btnOpenDialog"><i class="fa fa-wrench fa-fw"></i> Force Post-Process Albums in Download Folder</a>
</div>
</div>
</fieldset>
@@ -166,9 +166,9 @@
<legend>Force Legacy</legend>
<p>Please note that these functions will take a significant amount of time to complete.</p>
<div class="links">
<a href="#" onclick="doAjaxCall('forceFullUpdate',$(this))" data-success="Update active artists successful" data-error="Error forcing update artists"><i class="fa fa-heart fa-fw"></i> Force Update Active Artists [Comprehensive]</a>
<a href="javascript:void(0)" onclick="doAjaxCall('forceFullUpdate',$(this))" data-success="Update active artists successful" data-error="Error forcing update artists"><i class="fa fa-heart fa-fw"></i> Force Update Active Artists [Comprehensive]</a>
<BR>
<a href="#" onclick="doAjaxCall('forceScan',$(this))" data-success="Library scan successful" data-error="Error forcing library scan"><i class="fa fa-refresh fa-fw"></i> Force Re-scan Library [Comprehensive]</a>
<a href="javascript:void(0)" onclick="doAjaxCall('forceScan',$(this))" data-success="Library scan successful" data-error="Error forcing library scan"><i class="fa fa-refresh fa-fw"></i> Force Re-scan Library [Comprehensive]</a>
<BR>
<small>*Warning: If you choose [Force Re-scan Library], any manually ignored/matched artists/albums will be reset to "unmatched".</small>

View File

@@ -43,9 +43,9 @@
<tr><td>Are you sure you want to reset Local Artist: ${album['ArtistName']} to unmatched?</td></tr>
<tr><td align="right"><BR>
%if album['AlbumStatus'] == "Ignored":
<button href="#" onclick="doAjaxCall('markManual?action=unignoreArtist&existing_artist=${old_artist_clean}', $(this), 'page');" data-success="Successfully reset ${album['ArtistName']} to unmatched">Reset Artist</button>
<button href="javascript:void(0)" onclick="doAjaxCall('markManual?action=unignoreArtist&existing_artist=${old_artist_clean}', $(this), 'page');" data-success="Successfully reset ${album['ArtistName']} to unmatched">Reset Artist</button>
%elif album['AlbumStatus'] == "Matched":
<button href="#" onclick="doAjaxCall('markManual?action=unmatchArtist&existing_artist=${old_artist_clean}', $(this), 'page');" data-success="Successfully restored ${album['ArtistName']} to unmatched">Reset Artist</button>
<button href="javascript:void(0)" onclick="doAjaxCall('markManual?action=unmatchArtist&existing_artist=${old_artist_clean}', $(this), 'page');" data-success="Successfully restored ${album['ArtistName']} to unmatched">Reset Artist</button>
%endif
</td></tr>
</table>
@@ -58,9 +58,9 @@
<tr><td>Are you sure you want to reset Local Album: ${album['AlbumTitle']} to unmatched?</td></tr>
<tr><td align="right"><BR>
%if album['AlbumStatus'] == "Ignored":
<button href="#" onclick="doAjaxCall('markManual?action=unignoreAlbum&existing_artist=${old_artist_clean}&existing_album=${old_album_clean}', $(this), 'page');" data-success="Successfully reset ${album['AlbumTitle']} to unmatched">Reset Album</button>
<button href="javascript:void(0)" onclick="doAjaxCall('markManual?action=unignoreAlbum&existing_artist=${old_artist_clean}&existing_album=${old_album_clean}', $(this), 'page');" data-success="Successfully reset ${album['AlbumTitle']} to unmatched">Reset Album</button>
%elif album['AlbumStatus'] == "Matched":
<button href="#" onclick="doAjaxCall('markManual?action=unmatchAlbum&existing_artist=${old_artist_clean}&existing_album=${old_album_clean}', $(this), 'page');" data-success="Successfully reset ${album['AlbumTitle']} to unmatched">Reset Album</button>
<button href="javascript:void(0)" onclick="doAjaxCall('markManual?action=unmatchAlbum&existing_artist=${old_artist_clean}&existing_album=${old_album_clean}', $(this), 'page');" data-success="Successfully reset ${album['AlbumTitle']} to unmatched">Reset Album</button>
%endif
</td></tr>
</table>

View File

@@ -52,7 +52,7 @@
<table>
<tr><td>Are you sure you want to ignore Local Artist: ${album['ArtistName']} from future matching?</td></tr>
<tr><td align="right"><BR>
<button href="#" onclick="doAjaxCall('markUnmatched?action=ignoreArtist&existing_artist=${old_artist_clean}', $(this), 'page');" data-success="Successfully ignored ${album['ArtistName']} from future matching">Ignore Artist</button>
<button href="javascript:void(0)" onclick="doAjaxCall('markUnmatched?action=ignoreArtist&existing_artist=${old_artist_clean}', $(this), 'page');" data-success="Successfully ignored ${album['ArtistName']} from future matching">Ignore Artist</button>
</td></tr>
</table>
</div>
@@ -66,7 +66,7 @@
</select>
</td></tr>
<tr><td></td><td align="right"><BR>
<button href="#" onclick="artist_matcher(${count_albums}, '${old_artist_js}')">Match Artist</button>
<button href="javascript:void(0)" onclick="artist_matcher(${count_albums}, '${old_artist_js}')">Match Artist</button>
</td></tr>
</table>
</div>
@@ -78,7 +78,7 @@
<table>
<tr><td>Are you sure you want to ignore Local Album: ${album['AlbumTitle']} from future matching?</td></tr>
<tr><td align="right"><BR>
<button href="#" onclick="doAjaxCall('markUnmatched?action=ignoreAlbum&existing_artist=${old_artist_clean}&existing_album=${old_album_clean}', $(this), 'page');" data-success="Successfully ignored ${album['AlbumTitle']} from future matching">Ignore Album</button>
<button href="javascript:void(0)" onclick="doAjaxCall('markUnmatched?action=ignoreAlbum&existing_artist=${old_artist_clean}&existing_album=${old_album_clean}', $(this), 'page');" data-success="Successfully ignored ${album['AlbumTitle']} from future matching">Ignore Album</button>
</td></tr>
</table>
</div>
@@ -97,7 +97,7 @@
</select>
</td></tr>
<tr><td></td><td align="right"><BR>
<button href="#" onclick="album_matcher(${count_albums}, '${old_artist_js}', '${old_album_js}')">Match Album</button>
<button href="javascript:void(0)" onclick="album_matcher(${count_albums}, '${old_artist_js}', '${old_album_js}')">Match Album</button>
</td></tr>
</table>
</div>

View File

@@ -3,7 +3,7 @@
<%def name="headerIncludes()">
<div id="subhead_container">
<div id="subhead_menu">
<a href="#" id="menu_link_scan" onclick="doAjaxCall('forceSearch',$(this))" data-success="Checking for wanted albums successful" data-error="Error checking wanted albums"><i class="fa fa-search"></i> Force Check</a>
<a href="javascript:void(0)" id="menu_link_scan" onclick="doAjaxCall('forceSearch',$(this))" data-success="Checking for wanted albums successful" data-error="Error checking wanted albums"><i class="fa fa-search"></i> Force Check</a>
</div>
</div>
</%def>

View File

@@ -16,21 +16,21 @@
# NZBGet support added by CurlyMo <curlymoo1@gmail.com> as a part of
# XBian - XBMC on the Raspberry Pi
import os
import sys
import subprocess
import threading
import webbrowser
import sqlite3
import cherrypy
import datetime
import os
import cherrypy
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.interval import IntervalTrigger
from headphones import versioncheck, logger
import headphones.config
# (append new extras to the end)
POSSIBLE_EXTRAS = [
"single",
@@ -94,7 +94,6 @@ UMASK = None
def initialize(config_file):
with INIT_LOCK:
global CONFIG
@@ -131,11 +130,11 @@ def initialize(config_file):
if not QUIET:
sys.stderr.write("Unable to create the log directory. " \
"Logging to screen only.\n")
"Logging to screen only.\n")
# Start the logger, disable console if needed
logger.initLogger(console=not QUIET, log_dir=CONFIG.LOG_DIR,
verbose=VERBOSE)
if not CONFIG.CACHE_DIR:
# Put the cache dir in the data dir for now
@@ -246,7 +245,6 @@ def daemonize():
def launch_browser(host, port, root):
if host == '0.0.0.0':
host = 'localhost'
@@ -287,17 +285,19 @@ def initialize_scheduler():
hours = CONFIG.UPDATE_DB_INTERVAL
schedule_job(updater.dbUpdate, 'MusicBrainz Update', hours=hours, minutes=0)
#Update check
# Update check
if CONFIG.CHECK_GITHUB:
if CONFIG.CHECK_GITHUB_INTERVAL:
minutes = CONFIG.CHECK_GITHUB_INTERVAL
else:
minutes = 0
schedule_job(versioncheck.checkGithub, 'Check GitHub for updates', hours=0, minutes=minutes)
schedule_job(versioncheck.checkGithub, 'Check GitHub for updates', hours=0,
minutes=minutes)
# Remove Torrent + data if Post Processed and finished Seeding
minutes = CONFIG.TORRENT_REMOVAL_INTERVAL
schedule_job(torrentfinished.checkTorrentFinished, 'Torrent removal check', hours=0, minutes=minutes)
schedule_job(torrentfinished.checkTorrentFinished, 'Torrent removal check', hours=0,
minutes=minutes)
# Start scheduler
if start_jobs and len(SCHED.get_jobs()):
@@ -306,8 +306,8 @@ def initialize_scheduler():
except Exception as e:
logger.info(e)
# Debug
#SCHED.print_jobs()
# Debug
# SCHED.print_jobs()
def schedule_job(function, name, hours=0, minutes=0):
@@ -334,7 +334,6 @@ def schedule_job(function, name, hours=0, minutes=0):
def start():
global started
if _INITIALIZED:
@@ -349,7 +348,6 @@ def sig_handler(signum=None, frame=None):
def dbcheck():
conn = sqlite3.connect(DB_FILE)
c = conn.cursor()
c.execute(
@@ -609,7 +607,6 @@ def dbcheck():
def shutdown(restart=False, update=False):
cherrypy.engine.exit()
SCHED.shutdown(wait=False)
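
The scheduler wiring in this file follows one pattern throughout: wrap an interval from CONFIG in schedule_job, which registers the function with APScheduler. A self-contained sketch of that pattern, using only the apscheduler API imported at the top of this file (the job function is a placeholder, not a Headphones internal):

```python
# Self-contained sketch of the interval-job pattern used by
# initialize_scheduler()/schedule_job() above.
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.interval import IntervalTrigger

sched = BackgroundScheduler()

def schedule_job(function, name, hours=0, minutes=0):
    # Assumption: a zero interval means the job is disabled, mirroring
    # the CONFIG checks above (e.g. CHECK_GITHUB_INTERVAL may be unset).
    if hours == 0 and minutes == 0:
        return
    sched.add_job(function, IntervalTrigger(hours=hours, minutes=minutes),
                  name=name)

def check_github():
    print("Checking GitHub for updates...")

schedule_job(check_github, 'Check GitHub for updates', hours=0, minutes=30)
sched.start()
```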

View File

@@ -53,7 +53,6 @@ def switch(AlbumID, ReleaseID):
c.get_artwork_from_cache(AlbumID=AlbumID)
for track in newtrackdata:
controlValueDict = {"TrackID": track['TrackID'],
"AlbumID": AlbumID}
@@ -79,15 +78,18 @@ def switch(AlbumID, ReleaseID):
have_track_count = len(myDB.select(
'SELECT * from tracks WHERE AlbumID=? AND Location IS NOT NULL', [AlbumID]))
if oldalbumdata['Status'] == 'Skipped' and ((have_track_count / float(total_track_count)) >= (headphones.CONFIG.ALBUM_COMPLETION_PCT / 100.0)):
if oldalbumdata['Status'] == 'Skipped' and ((have_track_count / float(total_track_count)) >= (
headphones.CONFIG.ALBUM_COMPLETION_PCT / 100.0)):
myDB.action(
'UPDATE albums SET Status=? WHERE AlbumID=?', ['Downloaded', AlbumID])
# Update have track counts on index
totaltracks = len(myDB.select(
'SELECT TrackTitle from tracks WHERE ArtistID=? AND AlbumID IN (SELECT AlbumID FROM albums WHERE Status != "Ignored")', [newalbumdata['ArtistID']]))
'SELECT TrackTitle from tracks WHERE ArtistID=? AND AlbumID IN (SELECT AlbumID FROM albums WHERE Status != "Ignored")',
[newalbumdata['ArtistID']]))
havetracks = len(myDB.select(
'SELECT TrackTitle from tracks WHERE ArtistID=? AND Location IS NOT NULL', [newalbumdata['ArtistID']]))
'SELECT TrackTitle from tracks WHERE ArtistID=? AND Location IS NOT NULL',
[newalbumdata['ArtistID']]))
controlValueDict = {"ArtistID": newalbumdata['ArtistID']}
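
The Status condition re-wrapped earlier in this hunk is just a completion-percentage test; pulled out on its own (ALBUM_COMPLETION_PCT is a percentage, e.g. 80):

```python
# The Status='Downloaded' condition above, extracted for clarity.
def album_is_complete(have_track_count, total_track_count, completion_pct):
    """completion_pct is a percentage, e.g. 80 for 80%."""
    if total_track_count == 0:
        return False
    return (have_track_count / float(total_track_count)) >= completion_pct / 100.0

print(album_is_complete(9, 11, 80))  # True: 9/11 is about 82%
```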

View File

@@ -13,21 +13,25 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
from headphones import db, mb, updater, importer, searcher, cache, postprocessor, versioncheck, logger
import headphones
import json
cmd_list = ['getIndex', 'getArtist', 'getAlbum', 'getUpcoming', 'getWanted', 'getSnatched', 'getSimilar', 'getHistory', 'getLogs',
'findArtist', 'findAlbum', 'addArtist', 'delArtist', 'pauseArtist', 'resumeArtist', 'refreshArtist',
'addAlbum', 'queueAlbum', 'unqueueAlbum', 'forceSearch', 'forceProcess', 'forceActiveArtistsUpdate',
'getVersion', 'checkGithub', 'shutdown', 'restart', 'update', 'getArtistArt', 'getAlbumArt',
from headphones import db, mb, updater, importer, searcher, cache, postprocessor, versioncheck, \
logger
import headphones
cmd_list = ['getIndex', 'getArtist', 'getAlbum', 'getUpcoming', 'getWanted', 'getSnatched',
'getSimilar', 'getHistory', 'getLogs',
'findArtist', 'findAlbum', 'addArtist', 'delArtist', 'pauseArtist', 'resumeArtist',
'refreshArtist',
'addAlbum', 'queueAlbum', 'unqueueAlbum', 'forceSearch', 'forceProcess',
'forceActiveArtistsUpdate',
'getVersion', 'checkGithub', 'shutdown', 'restart', 'update', 'getArtistArt',
'getAlbumArt',
'getArtistInfo', 'getAlbumInfo', 'getArtistThumb', 'getAlbumThumb', 'clearLogs',
'choose_specific_download', 'download_specific_release']
class Api(object):
def __init__(self):
self.apikey = None
@@ -170,7 +174,7 @@ class Api(object):
self.data = self._dic_from_query(
"SELECT * from albums WHERE Status='Snatched'")
return
def _getSimilar(self, **kwargs):
self.data = self._dic_from_query('SELECT * from lastfmcloud')
return
@@ -328,10 +332,15 @@ class Api(object):
searcher.searchforalbum()
def _forceProcess(self, **kwargs):
self.dir = None
if 'dir' in kwargs:
if 'album_dir' in kwargs:
album_dir = kwargs['album_dir']
dir = None
postprocessor.forcePostProcess(self, dir, album_dir)
elif 'dir' in kwargs:
self.dir = kwargs['dir']
postprocessor.forcePostProcess(self.dir)
else:
postprocessor.forcePostProcess()
def _forceActiveArtistsUpdate(self, **kwargs):
updater.dbUpdate()
@@ -432,7 +441,6 @@ class Api(object):
results_as_dicts = []
for result in results:
result_dict = {
'title': result[0],
'size': result[1],

View File

@@ -14,8 +14,8 @@
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
import os
import headphones
from headphones import db, helpers, logger, lastfm, request
LASTFM_API_KEY = "690e1ed3bc00bc91804cd8f7fe5ed6d4"
@@ -45,8 +45,8 @@ class Cache(object):
def __init__(self):
self.id = None
self.id_type = None  # 'artist' or 'album' - set automatically depending on whether ArtistID or AlbumID is passed
self.query_type = None  # 'artwork','thumb' or 'info' - set automatically
self.artwork_files = []
self.thumb_files = []
@@ -182,13 +182,18 @@ class Cache(object):
if ArtistID:
self.id = ArtistID
self.id_type = 'artist'
db_info = myDB.action('SELECT Summary, Content, LastUpdated FROM descriptions WHERE ArtistID=?', [self.id]).fetchone()
db_info = myDB.action(
'SELECT Summary, Content, LastUpdated FROM descriptions WHERE ArtistID=?',
[self.id]).fetchone()
else:
self.id = AlbumID
self.id_type = 'album'
db_info = myDB.action('SELECT Summary, Content, LastUpdated FROM descriptions WHERE ReleaseGroupID=?', [self.id]).fetchone()
db_info = myDB.action(
'SELECT Summary, Content, LastUpdated FROM descriptions WHERE ReleaseGroupID=?',
[self.id]).fetchone()
if not db_info or not db_info['LastUpdated'] or not self._is_current(date=db_info['LastUpdated']):
if not db_info or not db_info['LastUpdated'] or not self._is_current(
date=db_info['LastUpdated']):
self._update_cache()
info_dict = {'Summary': self.info_summary, 'Content': self.info_content}
@@ -309,13 +314,19 @@ class Cache(object):
logger.debug('No artist thumbnail image found')
else:
dbalbum = myDB.action('SELECT ArtistName, AlbumTitle, ReleaseID FROM albums WHERE AlbumID=?', [self.id]).fetchone()
dbalbum = myDB.action(
'SELECT ArtistName, AlbumTitle, ReleaseID FROM albums WHERE AlbumID=?',
[self.id]).fetchone()
if dbalbum['ReleaseID'] != self.id:
data = lastfm.request_lastfm("album.getinfo", mbid=dbalbum['ReleaseID'], api_key=LASTFM_API_KEY)
data = lastfm.request_lastfm("album.getinfo", mbid=dbalbum['ReleaseID'],
api_key=LASTFM_API_KEY)
if not data:
data = lastfm.request_lastfm("album.getinfo", artist=dbalbum['ArtistName'], album=dbalbum['AlbumTitle'], api_key=LASTFM_API_KEY)
data = lastfm.request_lastfm("album.getinfo", artist=dbalbum['ArtistName'],
album=dbalbum['AlbumTitle'],
api_key=LASTFM_API_KEY)
else:
data = lastfm.request_lastfm("album.getinfo", artist=dbalbum['ArtistName'], album=dbalbum['AlbumTitle'], api_key=LASTFM_API_KEY)
data = lastfm.request_lastfm("album.getinfo", artist=dbalbum['ArtistName'],
album=dbalbum['AlbumTitle'], api_key=LASTFM_API_KEY)
if not data:
return
@@ -357,7 +368,8 @@ class Cache(object):
# Save the image URL to the database
if image_url:
if self.id_type == 'artist':
myDB.action('UPDATE artists SET ArtworkURL=? WHERE ArtistID=?', [image_url, self.id])
myDB.action('UPDATE artists SET ArtworkURL=? WHERE ArtistID=?',
[image_url, self.id])
else:
myDB.action('UPDATE albums SET ArtworkURL=? WHERE AlbumID=?', [image_url, self.id])
@@ -378,7 +390,8 @@ class Cache(object):
if not os.path.isdir(self.path_to_art_cache):
try:
os.makedirs(self.path_to_art_cache)
os.chmod(self.path_to_art_cache, int(headphones.CONFIG.FOLDER_PERMISSIONS, 8))
os.chmod(self.path_to_art_cache,
int(headphones.CONFIG.FOLDER_PERMISSIONS, 8))
except OSError as e:
logger.error('Unable to create artwork cache dir. Error: %s', e)
self.artwork_errors = True
@@ -393,7 +406,8 @@ class Cache(object):
ext = os.path.splitext(image_url)[1]
artwork_path = os.path.join(self.path_to_art_cache, self.id + '.' + helpers.today() + ext)
artwork_path = os.path.join(self.path_to_art_cache,
self.id + '.' + helpers.today() + ext)
try:
with open(artwork_path, 'wb') as f:
f.write(artwork)
@@ -406,7 +420,8 @@ class Cache(object):
# Grab the thumbnail as well if we're getting the full artwork (as long
# as it's missing/outdated.
if thumb_url and self.query_type in ['thumb', 'artwork'] and not (self.thumb_files and self._is_current(self.thumb_files[0])):
if thumb_url and self.query_type in ['thumb', 'artwork'] and not (
self.thumb_files and self._is_current(self.thumb_files[0])):
artwork = request.request_content(thumb_url, timeout=20)
if artwork:
@@ -414,7 +429,8 @@ class Cache(object):
if not os.path.isdir(self.path_to_art_cache):
try:
os.makedirs(self.path_to_art_cache)
os.chmod(self.path_to_art_cache, int(headphones.CONFIG.FOLDER_PERMISSIONS, 8))
os.chmod(self.path_to_art_cache,
int(headphones.CONFIG.FOLDER_PERMISSIONS, 8))
except OSError as e:
logger.error('Unable to create artwork cache dir. Error: %s' + e)
self.thumb_errors = True
@@ -429,7 +445,8 @@ class Cache(object):
ext = os.path.splitext(image_url)[1]
thumb_path = os.path.join(self.path_to_art_cache, 'T_' + self.id + '.' + helpers.today() + ext)
thumb_path = os.path.join(self.path_to_art_cache,
'T_' + self.id + '.' + helpers.today() + ext)
try:
with open(thumb_path, 'wb') as f:
f.write(artwork)

View File

@@ -13,9 +13,9 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
#########################################
## Stolen from Sick-Beard's classes.py ##
#########################################
#######################################
# Stolen from Sick-Beard's classes.py #
#######################################
import urllib
@@ -133,4 +133,5 @@ class Proper:
self.episode = -1
def __str__(self):
return str(self.date) + " " + self.name + " " + str(self.season) + "x" + str(self.episode) + " of " + str(self.tvdbid)
return str(self.date) + " " + self.name + " " + str(self.season) + "x" + str(
self.episode) + " of " + str(self.tvdbid)

View File

@@ -20,15 +20,16 @@ Created on Aug 1, 2011
'''
import platform
import operator
import os
import re
from headphones import version
#Identify Our Application
# Identify Our Application
USER_AGENT = 'Headphones/-' + version.HEADPHONES_VERSION + ' (' + platform.system() + ' ' + platform.release() + ')'
### Notification Types
# Notification Types
NOTIFY_SNATCH = 1
NOTIFY_DOWNLOAD = 2
@@ -36,26 +37,25 @@ notifyStrings = {}
notifyStrings[NOTIFY_SNATCH] = "Started Download"
notifyStrings[NOTIFY_DOWNLOAD] = "Download Finished"
### Release statuses
UNKNOWN = -1 # should never happen
UNAIRED = 1 # releases that haven't dropped yet
SNATCHED = 2 # qualified with quality
WANTED = 3 # releases we don't have but want to get
DOWNLOADED = 4 # qualified with quality
SKIPPED = 5 # releases we don't want
ARCHIVED = 6 # releases that you don't have locally (counts toward download completion stats)
IGNORED = 7 # releases that you don't want included in your download stats
SNATCHED_PROPER = 9 # qualified with quality
# Release statuses
UNKNOWN = -1 # should never happen
UNAIRED = 1 # releases that haven't dropped yet
SNATCHED = 2 # qualified with quality
WANTED = 3 # releases we don't have but want to get
DOWNLOADED = 4 # qualified with quality
SKIPPED = 5 # releases we don't want
ARCHIVED = 6 # releases that you don't have locally (counts toward download completion stats)
IGNORED = 7 # releases that you don't want included in your download stats
SNATCHED_PROPER = 9 # qualified with quality
class Quality:
NONE = 0
B192 = 1 << 1 # 2
VBR = 1 << 2 # 4
B256 = 1 << 3 # 8
B320 = 1 << 4 #16
FLAC = 1 << 5 #32
B192 = 1 << 1 # 2
VBR = 1 << 2 # 4
B256 = 1 << 3 # 8
B320 = 1 << 4 # 16
FLAC = 1 << 5 # 32
# put these bits at the other end of the spectrum, far enough out that they shouldn't interfere
UNKNOWN = 1 << 15
@@ -75,7 +75,8 @@ class Quality:
def _getStatusStrings(status):
toReturn = {}
for x in Quality.qualityStrings.keys():
toReturn[Quality.compositeStatus(status, x)] = Quality.statusPrefixes[status] + " (" + Quality.qualityStrings[x] + ")"
toReturn[Quality.compositeStatus(status, x)] = Quality.statusPrefixes[status] + " (" + \
Quality.qualityStrings[x] + ")"
return toReturn
@staticmethod
@@ -103,6 +104,9 @@ class Quality:
@staticmethod
def nameQuality(name):
def checkName(list, func):
return func([re.search(x, name, re.I) for x in list])
name = os.path.basename(name)
# if we have our exact text then assume we put it there
@@ -115,9 +119,7 @@ class Quality:
if regex_match:
return x
checkName = lambda list, func: func([re.search(x, name, re.I) for x in list])
#TODO: fix quality checking here
# TODO: fix quality checking here
if checkName(["mp3", "192"], any) and not checkName(["flac"], all):
return Quality.B192
elif checkName(["mp3", "256"], any) and not checkName(["flac"], all):
@@ -131,7 +133,6 @@ class Quality:
@staticmethod
def assumeQuality(name):
if name.lower().endswith(".mp3"):
return Quality.MP3
elif name.lower().endswith(".flac"):
@@ -167,13 +168,16 @@ class Quality:
SNATCHED = None
SNATCHED_PROPER = None
Quality.DOWNLOADED = [Quality.compositeStatus(DOWNLOADED, x) for x in Quality.qualityStrings.keys()]
Quality.SNATCHED = [Quality.compositeStatus(SNATCHED, x) for x in Quality.qualityStrings.keys()]
Quality.SNATCHED_PROPER = [Quality.compositeStatus(SNATCHED_PROPER, x) for x in Quality.qualityStrings.keys()]
Quality.SNATCHED_PROPER = [Quality.compositeStatus(SNATCHED_PROPER, x) for x in
Quality.qualityStrings.keys()]
MP3 = Quality.combineQualities([Quality.B192, Quality.B256, Quality.B320, Quality.VBR], [])
LOSSLESS = Quality.combineQualities([Quality.FLAC], [])
ANY = Quality.combineQualities([Quality.B192, Quality.B256, Quality.B320, Quality.VBR, Quality.FLAC], [])
ANY = Quality.combineQualities(
[Quality.B192, Quality.B256, Quality.B320, Quality.VBR, Quality.FLAC], [])
qualityPresets = (MP3, LOSSLESS, ANY)
qualityPresetStrings = {MP3: "MP3 (All bitrates 192+)",
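
Since each quality is a distinct bit, presets are plain OR-combinations and membership is a bitwise AND. A simplified standalone sketch (the real combineQualities also takes a second "preferred" list, omitted here):

```python
# Simplified sketch of the Quality bitmask scheme above.
B192 = 1 << 1   # 2
VBR = 1 << 2    # 4
B256 = 1 << 3   # 8
B320 = 1 << 4   # 16
FLAC = 1 << 5   # 32

def combine(qualities):
    mask = 0
    for quality in qualities:
        mask |= quality
    return mask

MP3 = combine([B192, B256, B320, VBR])        # 30
ANY = combine([B192, B256, B320, VBR, FLAC])  # 62

print(bool(MP3 & FLAC))  # False: FLAC is not part of the MP3 preset
print(bool(ANY & B320))  # True
```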

View File

@@ -1,7 +1,8 @@
import headphones.logger
import itertools
import os
import re
import headphones.logger
from configobj import ConfigObj
@@ -14,6 +15,7 @@ def bool_int(value):
value = 0
return int(bool(value))
_CONFIG_DEFINITIONS = {
'ADD_ALBUM_ART': (int, 'General', 0),
'ADVANCEDENCODER': (str, 'General', ''),
@@ -85,8 +87,10 @@ _CONFIG_DEFINITIONS = {
'EXTRA_TORZNABS': (list, 'Torznab', ''),
'FILE_FORMAT': (str, 'General', 'Track Artist - Album [Year] - Title'),
'FILE_PERMISSIONS': (str, 'General', '0644'),
'FILE_PERMISSIONS_ENABLED': (bool_int, 'General', True),
'FILE_UNDERSCORES': (int, 'General', 0),
'FOLDER_FORMAT': (str, 'General', 'Artist/Album [Year]'),
'FOLDER_PERMISSIONS_ENABLED': (bool_int, 'General', True),
'FOLDER_PERMISSIONS': (str, 'General', '0755'),
'FREEZE_DB': (int, 'General', 0),
'GIT_BRANCH': (str, 'General', 'master'),
@@ -160,7 +164,7 @@ _CONFIG_DEFINITIONS = {
'OMGWTFNZBS': (int, 'omgwtfnzbs', 0),
'OMGWTFNZBS_APIKEY': (str, 'omgwtfnzbs', ''),
'OMGWTFNZBS_UID': (str, 'omgwtfnzbs', ''),
'OPEN_MAGNET_LINKS': (int, 'General', 0), # 0: Ignore, 1: Open, 2: Convert
'OPEN_MAGNET_LINKS': (int, 'General', 0), # 0: Ignore, 1: Open, 2: Convert, 3: Embed (rtorrent)
'MAGNET_LINKS': (int, 'General', 0),
'OSX_NOTIFY_APP': (str, 'OSX_Notify', '/Applications/Headphones'),
'OSX_NOTIFY_ENABLED': (int, 'OSX_Notify', 0),
@@ -203,6 +207,8 @@ _CONFIG_DEFINITIONS = {
'PUSHOVER_ONSNATCH': (int, 'Pushover', 0),
'PUSHOVER_PRIORITY': (int, 'Pushover', 0),
'RENAME_FILES': (int, 'General', 0),
'RENAME_UNPROCESSED': (bool_int, 'General', 1),
'RENAME_FROZEN': (bool_int, 'General', 1),
'REPLACE_EXISTING_FOLDERS': (int, 'General', 0),
'KEEP_ORIGINAL_FOLDER': (int, 'General', 0),
'REQUIRED_WORDS': (str, 'General', ''),
@@ -250,7 +256,7 @@ _CONFIG_DEFINITIONS = {
'UTORRENT_PASSWORD': (str, 'uTorrent', ''),
'UTORRENT_USERNAME': (str, 'uTorrent', ''),
'VERIFY_SSL_CERT': (bool_int, 'Advanced', 1),
'WAIT_UNTIL_RELEASE_DATE' : (int, 'General', 0),
'WAIT_UNTIL_RELEASE_DATE': (int, 'General', 0),
'WAFFLES': (int, 'Waffles', 0),
'WAFFLES_PASSKEY': (str, 'Waffles', ''),
'WAFFLES_RATIO': (str, 'Waffles', ''),
@@ -268,6 +274,7 @@ _CONFIG_DEFINITIONS = {
'XLDPROFILE': (str, 'General', '')
}
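Each _CONFIG_DEFINITIONS entry maps a setting name to a (cast, ini section, default) triple, with bool_int (fragment above) normalizing truthy values to 0 or 1 before the int cast. A minimal sketch of unpacking one definition (the raw value is hypothetical):

# Hedged sketch: unpacking one (cast, section, default) definition triple.
cast, section, default = (int, 'General', 0)  # e.g. 'ADD_ALBUM_ART'
raw = '1'                                     # hypothetical value read from config.ini
print('%s %s %s' % (cast(raw), section, default))  # 1 General 0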
# pylint:disable=R0902
# it might be nice to refactor for fewer instance variables
class Config(object):
@@ -344,7 +351,7 @@ class Config(object):
""" Return the extra newznab tuples """
extra_newznabs = list(
itertools.izip(*[itertools.islice(self.EXTRA_NEWZNABS, i, None, 3)
for i in range(3)])
for i in range(3)])
)
return extra_newznabs
@@ -363,7 +370,7 @@ class Config(object):
""" Return the extra torznab tuples """
extra_torznabs = list(
itertools.izip(*[itertools.islice(self.EXTRA_TORZNABS, i, None, 3)
for i in range(3)])
for i in range(3)])
)
return extra_torznabs
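The flat EXTRA_TORZNABS list stores three consecutive fields per feed (presumably host, API key and an enabled flag; the field meanings are an assumption here), and the three islice views at offsets 0 to 2 with step 3, zipped together, regroup it into triples. A runnable sketch, with zip standing in for Python 2's itertools.izip:

# Hedged sketch of the islice regrouping used above.
import itertools

flat = ['http://host1', 'key1', '1', 'http://host2', 'key2', '0']
triples = list(zip(*[itertools.islice(flat, i, None, 3) for i in range(3)]))
print(triples)  # [('http://host1', 'key1', '1'), ('http://host2', 'key2', '0')]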

View File

@@ -15,13 +15,13 @@
# Most of this lifted from here: https://github.com/SzieberthAdam/gneposis-cdgrab
import os
import sys
import re
import subprocess
import copy
import glob
import os
import re
import headphones
from headphones import logger
from mutagen.flac import FLAC
@@ -62,7 +62,7 @@ WAVE_FILE_TYPE_BY_EXTENSION = {
'.flac': 'Free Lossless Audio Codec'
}
#SHNTOOL_COMPATIBLE = ("Free Lossless Audio Codec", "Waveform Audio", "Monkey's Audio")
# SHNTOOL_COMPATIBLE = ("Free Lossless Audio Codec", "Waveform Audio", "Monkey's Audio")
# TODO: Make this better!
# this module-level variable is bad. :(
@@ -288,7 +288,7 @@ class CueFile(File):
global line_content
c = self.content.splitlines()
header_dict = {}
#remaining_headers = CUE_HEADER
# remaining_headers = CUE_HEADER
remaining_headers = copy.copy(CUE_HEADER)
line_index = 0
match = True
@@ -314,7 +314,8 @@ class CueFile(File):
line_content = c[line_index]
search_result = re.search(CUE_TRACK, line_content, re.I)
if not search_result:
raise ValueError('inconsistent CUE sheet, TRACK expected at line {0}'.format(line_index + 1))
raise ValueError(
'inconsistent CUE sheet, TRACK expected at line {0}'.format(line_index + 1))
track_nr = int(search_result.group(1))
line_index += 1
next_track = False
@@ -353,7 +354,8 @@ class CueFile(File):
track_meta['dcpflag'] = True
line_index += 1
else:
raise ValueError('unknown entry in track, line {0}'.format(line_index + 1))
raise ValueError(
'unknown entry in track, line {0}'.format(line_index + 1))
else:
next_track = True
@@ -371,8 +373,8 @@ class CueFile(File):
if not self.content:
try:
with open(self.name, encoding="cp1252") as cue_file:
self.content = cue_file.read()
with open(self.name, encoding="cp1252") as cue_file:
self.content = cue_file.read()
except:
raise ValueError("Can't encode CUE sheet.")
@@ -406,9 +408,11 @@ class CueFile(File):
for i in range(len(self.tracks)):
if self.tracks[i]:
if self.tracks[i].get('artist'):
content += 'track' + int_to_str(i) + 'artist' + '\t' + self.tracks[i].get('artist') + '\n'
content += 'track' + int_to_str(i) + 'artist' + '\t' + self.tracks[i].get(
'artist') + '\n'
if self.tracks[i].get('title'):
content += 'track' + int_to_str(i) + 'title' + '\t' + self.tracks[i].get('title') + '\n'
content += 'track' + int_to_str(i) + 'title' + '\t' + self.tracks[i].get(
'title') + '\n'
return content
def htoa(self):
@@ -449,7 +453,8 @@ class MetaFile(File):
raise ValueError('Syntax error in album meta file')
if not content['tracks'][int(parsed_track.group(1))]:
content['tracks'][int(parsed_track.group(1))] = dict()
content['tracks'][int(parsed_track.group(1))][parsed_track.group(2)] = parsed_line.group(2)
content['tracks'][int(parsed_track.group(1))][
parsed_track.group(2)] = parsed_line.group(2)
else:
content[parsed_line.group(1)] = parsed_line.group(2)
@@ -472,15 +477,16 @@ class MetaFile(File):
if 'genre' in CUE_META.content:
common_tags['genre'] = CUE_META.content['genre']
#freeform tags
#freeform_tags['country'] = self.content['country']
#freeform_tags['releasedate'] = self.content['releasedate']
# freeform tags
# freeform_tags['country'] = self.content['country']
# freeform_tags['releasedate'] = self.content['releasedate']
return common_tags, freeform_tags
def folders(self):
artist = self.content['artist']
album = self.content['date'] + ' - ' + self.content['title'] + ' (' + self.content['label'] + ' - ' + self.content['catalog'] + ')'
album = self.content['date'] + ' - ' + self.content['title'] + ' (' + self.content[
'label'] + ' - ' + self.content['catalog'] + ')'
return artist, album
def complete(self):
@@ -535,6 +541,7 @@ class WaveFile(File):
if self.type == 'Free Lossless Audio Codec':
return FLAC(self.name)
def split(albumpath):
global CUE_META
os.chdir(albumpath)
@@ -577,7 +584,8 @@ def split(albumpath):
import getXldProfile
xldprofile, xldformat, _ = getXldProfile.getXldProfile(headphones.CONFIG.XLDPROFILE)
if not xldformat:
raise ValueError('Details for xld profile "%s" not found, cannot split cue' % (xldprofile))
raise ValueError(
'Details for xld profile "%s" not found, cannot split cue' % (xldprofile))
else:
if headphones.CONFIG.ENCODERFOLDER:
splitter = os.path.join(headphones.CONFIG.ENCODERFOLDER, 'xld')
@@ -590,7 +598,7 @@ def split(albumpath):
splitter = 'shntool'
if splitter == 'shntool' and not check_splitter(splitter):
raise ValueError('Command not found, ensure shntool or xld installed')
raise ValueError('Command not found, ensure shntool or xld installed')
# Determine if file can be split
if wave.name_ext not in WAVE_FILE_TYPE_BY_EXTENSION.keys():

View File

@@ -13,44 +13,41 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
#####################################
## Stolen from Sick-Beard's db.py ##
#####################################
###################################
# Stolen from Sick-Beard's db.py #
###################################
from __future__ import with_statement
import os
import sqlite3
import os
import headphones
from headphones import logger
def dbFilename(filename="headphones.db"):
return os.path.join(headphones.DATA_DIR, filename)
def getCacheSize():
#this will protect against typecasting problems produced by empty string and None settings
# this will protect against typecasting problems produced by empty string and None settings
if not headphones.CONFIG.CACHE_SIZEMB:
#sqlite will work with this (very slowly)
# sqlite will work with this (very slowly)
return 0
return int(headphones.CONFIG.CACHE_SIZEMB)
class DBConnection:
def __init__(self, filename="headphones.db"):
self.filename = filename
self.connection = sqlite3.connect(dbFilename(filename), timeout=20)
#don't wait for the disk to finish writing
# don't wait for the disk to finish writing
self.connection.execute("PRAGMA synchronous = OFF")
#journal disabled since we never do rollbacks
# journal disabled since we never do rollbacks
self.connection.execute("PRAGMA journal_mode = %s" % headphones.CONFIG.JOURNAL_MODE)
#64mb of cache memory, probably need to make it user configurable
# 64mb of cache memory, probably need to make it user configurable
self.connection.execute("PRAGMA cache_size=-%s" % (getCacheSize() * 1024))
self.connection.row_factory = sqlite3.Row
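SQLite reads a negative cache_size as a size in KiB rather than a page count, so -(CACHE_SIZEMB * 1024) asks for roughly CACHE_SIZEMB megabytes of page cache. A standalone sketch:

# Hedged sketch: negative PRAGMA cache_size values mean KiB in SQLite.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA cache_size=-%d' % (64 * 1024))     # roughly 64 MB of cache
print(conn.execute('PRAGMA cache_size').fetchone()[0])  # -65536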
@@ -92,17 +89,20 @@ class DBConnection:
def upsert(self, tableName, valueDict, keyDict):
def genParams(myDict):
return [x + " = ?" for x in myDict.keys()]
changesBefore = self.connection.total_changes
genParams = lambda myDict: [x + " = ?" for x in myDict.keys()]
update_query = "UPDATE " + tableName + " SET " + ", ".join(genParams(valueDict)) + " WHERE " + " AND ".join(genParams(keyDict))
update_query = "UPDATE " + tableName + " SET " + ", ".join(
genParams(valueDict)) + " WHERE " + " AND ".join(genParams(keyDict))
self.action(update_query, valueDict.values() + keyDict.values())
if self.connection.total_changes == changesBefore:
insert_query = (
"INSERT INTO " + tableName + " (" + ", ".join(valueDict.keys() + keyDict.keys()) + ")" +
"INSERT INTO " + tableName + " (" + ", ".join(
valueDict.keys() + keyDict.keys()) + ")" +
" VALUES (" + ", ".join(["?"] * len(valueDict.keys() + keyDict.keys())) + ")"
)
try:

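The upsert above is the classic update-then-insert pattern: run the UPDATE first and, if total_changes did not move (no row matched the keys), fall back to an INSERT. A self-contained sketch against a hypothetical table:

# Hedged sketch of the update-then-insert upsert pattern.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE artists (ArtistID TEXT PRIMARY KEY, Status TEXT)')

def upsert(artist_id, status):
    before = conn.total_changes
    conn.execute('UPDATE artists SET Status=? WHERE ArtistID=?', (status, artist_id))
    if conn.total_changes == before:  # UPDATE matched no row, so insert
        conn.execute('INSERT INTO artists (Status, ArtistID) VALUES (?, ?)',
                     (status, artist_id))

upsert('abc123', 'Wanted')  # first call inserts
upsert('abc123', 'Paused')  # second call updates in place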
View File

@@ -1,29 +1,29 @@
import os.path
import biplist
from headphones import logger
def getXldProfile(xldProfile):
xldProfileNotFound = xldProfile
expanded = os.path.expanduser('~/Library/Preferences/jp.tmkk.XLD.plist')
if not os.path.isfile(expanded):
logger.warn("Could not find xld preferences at: %s", expanded)
return(xldProfileNotFound, None, None)
return (xldProfileNotFound, None, None)
# Get xld preferences plist
try:
preferences = biplist.readPlist(expanded)
except (biplist.InvalidPlistException, biplist.NotBinaryPlistException), e:
logger.error("Error reading xld preferences plist: %s", e)
return(xldProfileNotFound, None, None)
return (xldProfileNotFound, None, None)
if not isinstance(preferences, dict):
logger.error("Error reading xld preferences plist, not a dict: %r", preferences)
return(xldProfileNotFound, None, None)
return (xldProfileNotFound, None, None)
profiles = preferences.get('Profiles', []) # pylint:disable=E1103
profiles = preferences.get('Profiles', []) # pylint:disable=E1103
xldProfile = xldProfile.lower()
for profile in profiles:
@@ -175,6 +175,6 @@ def getXldProfile(xldProfile):
if xldFormat and not xldBitrate:
xldBitrate = 400
return(xldProfileForCmd, xldFormat, xldBitrate)
return (xldProfileForCmd, xldFormat, xldBitrate)
return(xldProfileNotFound, None, None)
return (xldProfileNotFound, None, None)

View File

@@ -13,19 +13,19 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
from beets.mediafile import MediaFile, FileTypeError, UnreadableFileError
from operator import itemgetter
import unicodedata
import headphones
import datetime
import fnmatch
import shutil
import time
import sys
import fnmatch
import re
import os
from beets.mediafile import MediaFile, FileTypeError, UnreadableFileError
import headphones
# Modified from https://github.com/Verrus/beets-plugin-featInTitle
RE_FEATURING = re.compile(r"[fF]t\.|[fF]eaturing|[fF]eat\.|\b[wW]ith\b|&|vs\.")
@@ -35,7 +35,9 @@ RE_CD = re.compile(r"^(CD|dics)\s*[0-9]+$", re.I)
def multikeysort(items, columns):
comparers = [((itemgetter(col[1:].strip()), -1) if col.startswith('-') else (itemgetter(col.strip()), 1)) for col in columns]
comparers = [
((itemgetter(col[1:].strip()), -1) if col.startswith('-') else (itemgetter(col.strip()), 1))
for col in columns]
def comparer(left, right):
for fn, mult in comparers:
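multikeysort orders dicts on several keys at once, with a leading '-' flipping that key to descending via the -1 multiplier built above. A hedged mini-version, runnable on its own:

# Hedged mini-version of multikeysort, for illustration only.
from functools import cmp_to_key  # available in Python 2.7 and 3
from operator import itemgetter

def multikeysort(items, columns):
    comparers = [(itemgetter(col.lstrip('-').strip()),
                  -1 if col.startswith('-') else 1) for col in columns]

    def comparer(left, right):
        for fn, mult in comparers:
            a, b = fn(left), fn(right)
            result = (a > b) - (a < b)  # cmp() replacement
            if result:
                return mult * result
        return 0

    return sorted(items, key=cmp_to_key(comparer))

releases = [{'hasasin': 0, 'country': 'DE'}, {'hasasin': 1, 'country': 'US'}]
print(multikeysort(releases, ['-hasasin', 'country']))  # ASIN release sorts first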
@@ -56,7 +58,6 @@ def checked(variable):
def radio(variable, pos):
if variable == pos:
return 'Checked'
else:
@@ -70,7 +71,7 @@ def latinToAscii(unicrap):
xlate = {
0xc0: 'A', 0xc1: 'A', 0xc2: 'A', 0xc3: 'A', 0xc4: 'A', 0xc5: 'A',
0xc6: 'Ae', 0xc7: 'C',
0xc8: 'E', 0xc9: 'E', 0xca: 'E', 0xcb: 'E', 0x86: 'e',
0xc8: 'E', 0xc9: 'E', 0xca: 'E', 0xcb: 'E', 0x86: 'e', 0x39e: 'E',
0xcc: 'I', 0xcd: 'I', 0xce: 'I', 0xcf: 'I',
0xd0: 'Th', 0xd1: 'N',
0xd2: 'O', 0xd3: 'O', 0xd4: 'O', 0xd5: 'O', 0xd6: 'O', 0xd8: 'O',
@@ -107,7 +108,6 @@ def latinToAscii(unicrap):
def convert_milliseconds(ms):
seconds = ms / 1000
gmtime = time.gmtime(seconds)
if seconds > 3600:
@@ -119,7 +119,6 @@ def convert_milliseconds(ms):
def convert_seconds(s):
gmtime = time.gmtime(s)
if s > 3600:
minutes = time.strftime("%H:%M:%S", gmtime)
@@ -141,7 +140,6 @@ def now():
def get_age(date):
try:
split_date = date.split('-')
except:
@@ -149,14 +147,13 @@ def get_age(date):
try:
days_old = int(split_date[0]) * 365 + int(split_date[1]) * 30 + int(split_date[2])
except (IndexError,ValueError):
except (IndexError, ValueError):
days_old = False
return days_old
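get_age collapses a YYYY-MM-DD string into a rough day count (365 per year, 30 per month), which is good enough for the relative date comparisons made elsewhere. A worked sketch:

# Hedged sketch of the rough day-count arithmetic above.
def get_age(date):
    y, m, d = [int(part) for part in date.split('-')]
    return y * 365 + m * 30 + d

print(get_age('2016-02-01') - get_age('2016-01-20'))  # 11, i.e. roughly 11 days apart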
def bytes_to_mb(bytes):
mb = int(bytes) / 1048576
size = '%.1f MB' % mb
return size
@@ -172,7 +169,7 @@ def piratesize(size):
split = size.split(" ")
factor = float(split[0])
unit = split[1].upper()
if unit == 'MIB':
size = factor * 1048576
elif unit == 'MB':
@@ -194,7 +191,6 @@ def piratesize(size):
def replace_all(text, dic, normalize=False):
if not text:
return ''
@@ -221,15 +217,14 @@ def replace_illegal_chars(string, type="file"):
def cleanName(string):
pass1 = latinToAscii(string).lower()
out_string = re.sub('[\.\-\/\!\@\#\$\%\^\&\*\(\)\+\-\"\'\,\;\:\[\]\{\}\<\>\=\_]', '', pass1).encode('utf-8')
out_string = re.sub('[\.\-\/\!\@\#\$\%\^\&\*\(\)\+\-\"\'\,\;\:\[\]\{\}\<\>\=\_]', '',
pass1).encode('utf-8')
return out_string
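cleanName lowercases, transliterates and strips punctuation so that tag and filename variants collapse to the same CleanName key used for track matching. A sketch of the substitution step alone (latinToAscii omitted):

# Hedged sketch: the punctuation stripping behind CleanName matching.
import re

pass1 = 'AC/DC - Back in Black!'.lower()
print(re.sub('[\.\-\/\!\@\#\$\%\^\&\*\(\)\+\-\"\'\,\;\:\[\]\{\}\<\>\=\_]', '', pass1))
# -> 'acdc  back in black'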
def cleanTitle(title):
title = re.sub('[\.\-\/\_]', ' ', title).lower()
# Strip out extra whitespace
@@ -312,16 +307,22 @@ def expand_subfolders(f):
difference = max(path_depths) - min(path_depths)
if difference > 0:
logger.info("Found %d media folders, but the depth difference between the shallowest and deepest media folder is %d (expected zero). If this is a discography or a collection of albums, make sure each album is in its own folder.", len(media_folders), difference)
logger.info(
"Found %d media folders, but the depth difference between the shallowest and deepest media folder is %d (expected zero). If this is a discography or a collection of albums, make sure each album is in its own folder.",
len(media_folders), difference)
# While the scan has already failed, advise the user on what to try. We
# assume the directory may contain separate CDs and maybe some extras. The
# structure may look like X albums at the same depth, and (one or more)
# extra folders at a greater depth.
extra_media_folders = [media_folder[:min(path_depths)] for media_folder in media_folders if len(media_folder) > min(path_depths)]
extra_media_folders = list(set([os.path.join(*media_folder) for media_folder in extra_media_folders]))
extra_media_folders = [media_folder[:min(path_depths)] for media_folder in media_folders if
len(media_folder) > min(path_depths)]
extra_media_folders = list(
set([os.path.join(*media_folder) for media_folder in extra_media_folders]))
logger.info("Please look at the following folder(s), since they cause the depth difference: %s", extra_media_folders)
logger.info(
"Please look at the following folder(s), since they cause the depth difference: %s",
extra_media_folders)
return
# Convert back to paths and remove duplicates, which may be there after
@@ -368,7 +369,7 @@ def path_filter_patterns(paths, patterns, root=None):
for path in paths[:]:
if path_match_patterns(path, patterns):
logger.debug("Path ignored by pattern: %s",
os.path.join(root or "", path))
os.path.join(root or "", path))
ignored += 1
paths.remove(path)
@@ -378,11 +379,11 @@ def path_filter_patterns(paths, patterns, root=None):
def extract_data(s):
s = s.replace('_', ' ')
#headphones default format
pattern = re.compile(r'(?P<name>.*?)\s\-\s(?P<album>.*?)\s[\[\(](?P<year>.*?)[\]\)]', re.VERBOSE)
# headphones default format
pattern = re.compile(r'(?P<name>.*?)\s\-\s(?P<album>.*?)\s[\[\(](?P<year>.*?)[\]\)]',
re.VERBOSE)
match = pattern.match(s)
if match:
@@ -391,7 +392,7 @@ def extract_data(s):
year = match.group("year")
return (name, album, year)
#Gonna take a guess on this one - might be enough to search on mb
# Gonna take a guess on this one - might be enough to search on mb
pat = re.compile(r"(?P<name>.*?)\s*-\s*(?P<album>[^\[(-]*)")
match = pat.match(s)
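extract_data tries the default 'Artist - Album [Year]' naming first, then falls back to the looser guess pattern above. A quick check of the default pattern:

# Hedged sketch: the default-format pattern applied to a release folder name.
import re

pattern = re.compile(r'(?P<name>.*?)\s\-\s(?P<album>.*?)\s[\[\(](?P<year>.*?)[\]\)]')
m = pattern.match('Some_Artist - Some Album [2016]'.replace('_', ' '))
print('%s | %s | %s' % (m.group('name'), m.group('album'), m.group('year')))
# Some Artist | Some Album | 2016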
@@ -468,7 +469,8 @@ def extract_metadata(f):
old_album = new_albums[index]
new_albums[index] = RE_CD_ALBUM.sub("", album).strip()
logger.debug("Stripped album number identifier: %s -> %s", old_album, new_albums[index])
logger.debug("Stripped album number identifier: %s -> %s", old_album,
new_albums[index])
# Remove duplicates
new_albums = list(set(new_albums))
@@ -498,7 +500,8 @@ def extract_metadata(f):
return (artist, albums[0], years[0])
# Not sure what to do here.
logger.info("Found %d artists, %d albums and %d years in metadata, so ignoring", len(artists), len(albums), len(years))
logger.info("Found %d artists, %d albums and %d years in metadata, so ignoring", len(artists),
len(albums), len(years))
logger.debug("Artists: %s, Albums: %s, Years: %s", artists, albums, years)
return (None, None, None)
@@ -524,8 +527,10 @@ def preserve_torrent_directory(albumpath):
Copy torrent directory to headphones-modified to keep files for seeding.
"""
from headphones import logger
new_folder = os.path.join(albumpath, 'headphones-modified'.encode(headphones.SYS_ENCODING, 'replace'))
logger.info("Copying files to 'headphones-modified' subfolder to preserve downloaded files for seeding")
new_folder = os.path.join(albumpath,
'headphones-modified'.encode(headphones.SYS_ENCODING, 'replace'))
logger.info(
"Copying files to 'headphones-modified' subfolder to preserve downloaded files for seeding")
try:
shutil.copytree(albumpath, new_folder)
return new_folder
@@ -578,7 +583,9 @@ def cue_split(albumpath):
def extract_logline(s):
# Default log format
pattern = re.compile(r'(?P<timestamp>.*?)\s\-\s(?P<level>.*?)\s*\:\:\s(?P<thread>.*?)\s\:\s(?P<message>.*)', re.VERBOSE)
pattern = re.compile(
r'(?P<timestamp>.*?)\s\-\s(?P<level>.*?)\s*\:\:\s(?P<thread>.*?)\s\:\s(?P<message>.*)',
re.VERBOSE)
match = pattern.match(s)
if match:
timestamp = match.group("timestamp")
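The log pattern splits a line into timestamp, level, thread and message around the ' - ', '::' and ':' separators. A runnable check against a hypothetical log line:

# Hedged sketch: parsing a hypothetical Headphones log line.
import re

pattern = re.compile(
    r'(?P<timestamp>.*?)\s\-\s(?P<level>.*?)\s*\:\:\s(?P<thread>.*?)\s\:\s(?P<message>.*)')
m = pattern.match('29-Jan-2016 12:00:00 - INFO :: MainThread : Scan complete')
print('%s | %s' % (m.group('level'), m.group('message')))  # INFO | Scan complete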
@@ -593,7 +600,7 @@ def extract_logline(s):
def extract_song_data(s):
from headphones import logger
#headphones default format
# headphones default format
pattern = re.compile(r'(?P<name>.*?)\s\-\s(?P<album>.*?)\s\[(?P<year>.*?)\]', re.VERBOSE)
match = pattern.match(s)
@@ -605,7 +612,7 @@ def extract_song_data(s):
else:
logger.info("Couldn't parse %s into a valid default format", s)
#newzbin default format
# newzbin default format
pattern = re.compile(r'(?P<name>.*?)\s\-\s(?P<album>.*?)\s\((?P<year>\d+?\))', re.VERBOSE)
match = pattern.match(s)
if match:
@@ -619,7 +626,6 @@ def extract_song_data(s):
def smartMove(src, dest, delete=True):
from headphones import logger
source_dir = os.path.dirname(src)
@@ -640,7 +646,8 @@ def smartMove(src, dest, delete=True):
os.rename(src, os.path.join(source_dir, newfile))
filename = newfile
except Exception as e:
logger.warn('Error renaming %s: %s', src.decode(headphones.SYS_ENCODING, 'replace'), e)
logger.warn('Error renaming %s: %s',
src.decode(headphones.SYS_ENCODING, 'replace'), e)
break
try:
@@ -650,7 +657,9 @@ def smartMove(src, dest, delete=True):
shutil.copy(os.path.join(source_dir, filename), os.path.join(dest, filename))
return True
except Exception as e:
logger.warn('Error moving file %s: %s', filename.decode(headphones.SYS_ENCODING, 'replace'), e)
logger.warn('Error moving file %s: %s', filename.decode(headphones.SYS_ENCODING, 'replace'),
e)
def walk_directory(basedir, followlinks=True):
"""
@@ -672,8 +681,8 @@ def walk_directory(basedir, followlinks=True):
real_path = os.path.abspath(os.readlink(path))
if real_path in traversed:
logger.debug("Skipping '%s' since it is a symlink to "\
"'%s', which was already visited.", path, real_path)
logger.debug("Skipping '%s' since it is a symlink to " \
"'%s', which was already visited.", path, real_path)
else:
traversed.append(real_path)
@@ -689,8 +698,9 @@ def walk_directory(basedir, followlinks=True):
for result in _inner(*args):
yield result
#########################
#Sab renaming functions #
# Sab renaming functions #
#########################
# TODO: Grab config values from sab to know when these options are checked. For now we'll just iterate through all combinations
@@ -739,18 +749,20 @@ def sab_sanitize_foldername(name):
if not name:
name = 'unknown'
#maxlen = cfg.folder_max_length()
#if len(name) > maxlen:
# maxlen = cfg.folder_max_length()
# if len(name) > maxlen:
# name = name[:maxlen]
return name
def split_string(mystring, splitvar=','):
mylist = []
for each_word in mystring.split(splitvar):
mylist.append(each_word.strip())
return mylist
def create_https_certificates(ssl_cert, ssl_key):
"""
Create a pair of self-signed HTTPS certificates and store them in
@@ -768,11 +780,13 @@ def create_https_certificates(ssl_cert, ssl_key):
# Create the CA Certificate
cakey = createKeyPair(TYPE_RSA, 2048)
careq = createCertRequest(cakey, CN="Certificate Authority")
cacert = createCertificate(careq, (careq, cakey), serial, (0, 60 * 60 * 24 * 365 * 10)) # ten years
cacert = createCertificate(careq, (careq, cakey), serial,
(0, 60 * 60 * 24 * 365 * 10)) # ten years
pkey = createKeyPair(TYPE_RSA, 2048)
req = createCertRequest(pkey, CN="Headphones")
cert = createCertificate(req, (cacert, cakey), serial, (0, 60 * 60 * 24 * 365 * 10)) # ten years
cert = createCertificate(req, (cacert, cakey), serial,
(0, 60 * 60 * 24 * 365 * 10)) # ten years
# Save the key and certificate to disk
try:

View File

@@ -13,39 +13,39 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
from headphones import logger, helpers, db, mb, lastfm, metacritic
from beets.mediafile import MediaFile
import time
from headphones import logger, helpers, db, mb, lastfm, metacritic
from beets.mediafile import MediaFile
import headphones
blacklisted_special_artist_names = ['[anonymous]', '[data]', '[no artist]',
'[traditional]', '[unknown]', 'Various Artists']
'[traditional]', '[unknown]', 'Various Artists']
blacklisted_special_artists = ['f731ccc4-e22a-43af-a747-64213329e088',
'33cf029c-63b0-41a0-9855-be2a3665fb3b',
'314e1c25-dde7-4e4d-b2f4-0a7b9f7c56dc',
'eec63d3c-3b81-4ad4-b1e4-7c147d4d2b61',
'9be7f096-97ec-4615-8957-8d40b5dcbc41',
'125ec42a-7229-4250-afc5-e057484327fe',
'89ad4ac3-39f7-470e-963a-56509c546377']
'33cf029c-63b0-41a0-9855-be2a3665fb3b',
'314e1c25-dde7-4e4d-b2f4-0a7b9f7c56dc',
'eec63d3c-3b81-4ad4-b1e4-7c147d4d2b61',
'9be7f096-97ec-4615-8957-8d40b5dcbc41',
'125ec42a-7229-4250-afc5-e057484327fe',
'89ad4ac3-39f7-470e-963a-56509c546377']
def is_exists(artistid):
myDB = db.DBConnection()
# See if the artist is already in the database
artistlist = myDB.select('SELECT ArtistID, ArtistName from artists WHERE ArtistID=?', [artistid])
artistlist = myDB.select('SELECT ArtistID, ArtistName from artists WHERE ArtistID=?',
[artistid])
if any(artistid in x for x in artistlist):
logger.info(artistlist[0][1] + u" is already in the database. Updating 'have tracks', but not artist information")
logger.info(artistlist[0][
1] + u" is already in the database. Updating 'have tracks', but not artist information")
return True
else:
return False
def artistlist_to_mbids(artistlist, forced=False):
for artist in artistlist:
if not artist and artist != ' ':
@@ -77,9 +77,12 @@ def artistlist_to_mbids(artistlist, forced=False):
myDB = db.DBConnection()
if not forced:
bl_artist = myDB.action('SELECT * FROM blacklist WHERE ArtistID=?', [artistid]).fetchone()
bl_artist = myDB.action('SELECT * FROM blacklist WHERE ArtistID=?',
[artistid]).fetchone()
if bl_artist or artistid in blacklisted_special_artists:
logger.info("Artist ID for '%s' is either blacklisted or Various Artists. To add artist, you must do it manually (Artist ID: %s)" % (artist, artistid))
logger.info(
"Artist ID for '%s' is either blacklisted or Various Artists. To add artist, you must do it manually (Artist ID: %s)" % (
artist, artistid))
continue
# Add to database if it doesn't exist
@@ -88,7 +91,9 @@ def artistlist_to_mbids(artistlist, forced=False):
# Just update the tracks if it does
else:
havetracks = len(myDB.select('SELECT TrackTitle from tracks WHERE ArtistID=?', [artistid])) + len(myDB.select('SELECT TrackTitle from have WHERE ArtistName like ?', [artist]))
havetracks = len(
myDB.select('SELECT TrackTitle from tracks WHERE ArtistID=?', [artistid])) + len(
myDB.select('SELECT TrackTitle from have WHERE ArtistName like ?', [artist]))
myDB.action('UPDATE artists SET HaveTracks=? WHERE ArtistID=?', [havetracks, artistid])
# Delete it from the New Artists if the request came from there
@@ -112,7 +117,6 @@ def addArtistIDListToDB(artistidlist):
def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
# Putting this here to get around the circular import. We're using this to update thumbnails for artist/albums
from headphones import cache
@@ -142,7 +146,7 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
"Status": "Loading",
"IncludeExtras": headphones.CONFIG.INCLUDE_EXTRAS,
"Extras": headphones.CONFIG.EXTRAS}
if type=="series":
if type == "series":
newValueDict['Type'] = "series"
else:
newValueDict = {"Status": "Loading"}
@@ -151,7 +155,7 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
myDB.upsert("artists", newValueDict, controlValueDict)
if type=="series":
if type == "series":
artist = mb.getSeries(artistid)
else:
artist = mb.getArtist(artistid, extrasonly)
@@ -159,7 +163,7 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
if artist and artist.get('artist_name') in blacklisted_special_artist_names:
logger.warn('Cannot import blocked special purpose artist: %s' % artist.get('artist_name'))
myDB.action('DELETE from artists WHERE ArtistID=?', [artistid])
#in case it's already in the db
# in case it's already in the db
myDB.action('DELETE from albums WHERE ArtistID=?', [artistid])
myDB.action('DELETE from tracks WHERE ArtistID=?', [artistid])
return
@@ -168,7 +172,7 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
logger.warn("Error fetching artist info. ID: " + artistid)
if dbartist is None:
newValueDict = {"ArtistName": "Fetch failed, try refreshing. (%s)" % (artistid),
"Status": "Active"}
"Status": "Active"}
else:
newValueDict = {"Status": "Active"}
myDB.upsert("artists", newValueDict, controlValueDict)
@@ -191,7 +195,8 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
# See if we need to grab extras. Artist specific extras take precedence
# over global option. Global options are set when adding a new artist
try:
db_artist = myDB.action('SELECT IncludeExtras, Extras from artists WHERE ArtistID=?', [artistid]).fetchone()
db_artist = myDB.action('SELECT IncludeExtras, Extras from artists WHERE ArtistID=?',
[artistid]).fetchone()
includeExtras = db_artist['IncludeExtras']
except IndexError:
includeExtras = False
@@ -206,9 +211,12 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
for groups in artist['releasegroups']:
group_list.append(groups['id'])
if not extrasonly:
remove_missing_groups_from_albums = myDB.select("SELECT AlbumID FROM albums WHERE ArtistID=?", [artistid])
remove_missing_groups_from_albums = myDB.select(
"SELECT AlbumID FROM albums WHERE ArtistID=?", [artistid])
else:
remove_missing_groups_from_albums = myDB.select('SELECT AlbumID FROM albums WHERE ArtistID=? AND Status="Skipped" AND Type!="Album"', [artistid])
remove_missing_groups_from_albums = myDB.select(
'SELECT AlbumID FROM albums WHERE ArtistID=? AND Status="Skipped" AND Type!="Album"',
[artistid])
for items in remove_missing_groups_from_albums:
if items['AlbumID'] not in group_list:
# Remove all from albums/tracks that aren't in release groups
@@ -217,12 +225,16 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
myDB.action("DELETE FROM tracks WHERE AlbumID=?", [items['AlbumID']])
myDB.action("DELETE FROM alltracks WHERE AlbumID=?", [items['AlbumID']])
myDB.action('DELETE from releases WHERE ReleaseGroupID=?', [items['AlbumID']])
logger.info("[%s] Removing all references to release group %s to reflect MusicBrainz refresh" % (artist['artist_name'], items['AlbumID']))
logger.info(
"[%s] Removing all references to release group %s to reflect MusicBrainz refresh" % (
artist['artist_name'], items['AlbumID']))
if not extrasonly:
force_repackage = 1
else:
if not extrasonly:
logger.info("[%s] There was either an error pulling data from MusicBrainz or there might not be any releases for this category" % artist['artist_name'])
logger.info(
"[%s] There was either an error pulling data from MusicBrainz or there might not be any releases for this category" %
artist['artist_name'])
# Then search for releases within releasegroups, if releases don't exist, then remove from allalbums/alltracks
album_searches = []
@@ -232,7 +244,7 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
today = helpers.today()
rgid = rg['id']
skip_log = 0
#Make a user configurable variable to skip update of albums with release dates older than this date (in days)
# Make a user configurable variable to skip update of albums with release dates older than this date (in days)
pause_delta = headphones.CONFIG.MB_IGNORE_AGE
rg_exists = myDB.action("SELECT * from albums WHERE AlbumID=?", [rg['id']]).fetchone()
@@ -247,12 +259,14 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
new_release_group = True
if new_release_group:
logger.info("[%s] Now adding: %s (New Release Group)" % (artist['artist_name'], rg['title']))
logger.info("[%s] Now adding: %s (New Release Group)" % (
artist['artist_name'], rg['title']))
new_releases = mb.get_new_releases(rgid, includeExtras)
else:
if check_release_date is None or check_release_date == u"None":
logger.info("[%s] Now updating: %s (No Release Date)" % (artist['artist_name'], rg['title']))
logger.info("[%s] Now updating: %s (No Release Date)" % (
artist['artist_name'], rg['title']))
new_releases = mb.get_new_releases(rgid, includeExtras, True)
else:
if len(check_release_date) == 10:
@@ -264,20 +278,24 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
else:
release_date = today
if helpers.get_age(today) - helpers.get_age(release_date) < pause_delta:
logger.info("[%s] Now updating: %s (Release Date <%s Days)", artist['artist_name'], rg['title'], pause_delta)
logger.info("[%s] Now updating: %s (Release Date <%s Days)",
artist['artist_name'], rg['title'], pause_delta)
new_releases = mb.get_new_releases(rgid, includeExtras, True)
else:
logger.info("[%s] Skipping: %s (Release Date >%s Days)", artist['artist_name'], rg['title'], pause_delta)
logger.info("[%s] Skipping: %s (Release Date >%s Days)",
artist['artist_name'], rg['title'], pause_delta)
skip_log = 1
new_releases = 0
if force_repackage == 1:
new_releases = -1
logger.info('[%s] Forcing repackage of %s (Release Group Removed)', artist['artist_name'], al_title)
logger.info('[%s] Forcing repackage of %s (Release Group Removed)',
artist['artist_name'], al_title)
else:
new_releases = new_releases
else:
logger.info("[%s] Now adding/updating: %s (Comprehensive Force)", artist['artist_name'], rg['title'])
logger.info("[%s] Now adding/updating: %s (Comprehensive Force)", artist['artist_name'],
rg['title'])
new_releases = mb.get_new_releases(rgid, includeExtras, forcefull)
if new_releases != 0:
@@ -291,23 +309,26 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
# This will be used later to build a hybrid release
fullreleaselist = []
# Search for releases within a release group
find_hybrid_releases = myDB.action("SELECT * from allalbums WHERE AlbumID=?", [rg['id']])
find_hybrid_releases = myDB.action("SELECT * from allalbums WHERE AlbumID=?",
[rg['id']])
# Build the dictionary for the fullreleaselist
for items in find_hybrid_releases:
if items['ReleaseID'] != rg['id']: #don't include hybrid information, since that's what we're replacing
if items['ReleaseID'] != rg[
'id']: # don't include hybrid information, since that's what we're replacing
hybrid_release_id = items['ReleaseID']
newValueDict = {"ArtistID": items['ArtistID'],
"ArtistName": items['ArtistName'],
"AlbumTitle": items['AlbumTitle'],
"AlbumID": items['AlbumID'],
"AlbumASIN": items['AlbumASIN'],
"ReleaseDate": items['ReleaseDate'],
"Type": items['Type'],
"ReleaseCountry": items['ReleaseCountry'],
"ReleaseFormat": items['ReleaseFormat']
}
find_hybrid_tracks = myDB.action("SELECT * from alltracks WHERE ReleaseID=?", [hybrid_release_id])
"ArtistName": items['ArtistName'],
"AlbumTitle": items['AlbumTitle'],
"AlbumID": items['AlbumID'],
"AlbumASIN": items['AlbumASIN'],
"ReleaseDate": items['ReleaseDate'],
"Type": items['Type'],
"ReleaseCountry": items['ReleaseCountry'],
"ReleaseFormat": items['ReleaseFormat']
}
find_hybrid_tracks = myDB.action("SELECT * from alltracks WHERE ReleaseID=?",
[hybrid_release_id])
totalTracks = 1
hybrid_track_array = []
for hybrid_tracks in find_hybrid_tracks:
@@ -315,9 +336,9 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
'number': hybrid_tracks['TrackNumber'],
'title': hybrid_tracks['TrackTitle'],
'id': hybrid_tracks['TrackID'],
#'url': hybrid_tracks['TrackURL'],
# 'url': hybrid_tracks['TrackURL'],
'duration': hybrid_tracks['TrackDuration']
})
})
totalTracks += 1
newValueDict['ReleaseID'] = hybrid_release_id
newValueDict['Tracks'] = hybrid_track_array
@@ -327,10 +348,12 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
# This may end up being called with an empty fullreleaselist
try:
hybridrelease = getHybridRelease(fullreleaselist)
logger.info('[%s] Packaging %s releases into hybrid title' % (artist['artist_name'], rg['title']))
logger.info('[%s] Packaging %s releases into hybrid title' % (
artist['artist_name'], rg['title']))
except Exception as e:
errors = True
logger.warn('[%s] Unable to get hybrid release information for %s: %s' % (artist['artist_name'], rg['title'], e))
logger.warn('[%s] Unable to get hybrid release information for %s: %s' % (
artist['artist_name'], rg['title'], e))
continue
# Use the ReleaseGroupID as the ReleaseID for the hybrid release to differentiate it
@@ -345,13 +368,14 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
"AlbumASIN": hybridrelease['AlbumASIN'],
"ReleaseDate": hybridrelease['ReleaseDate'],
"Type": rg['type']
}
}
myDB.upsert("allalbums", newValueDict, controlValueDict)
for track in hybridrelease['Tracks']:
cleanname = helpers.cleanName(artist['artist_name'] + ' ' + rg['title'] + ' ' + track['title'])
cleanname = helpers.cleanName(
artist['artist_name'] + ' ' + rg['title'] + ' ' + track['title'])
controlValueDict = {"TrackID": track['id'],
"ReleaseID": rg['id']}
@@ -365,25 +389,29 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
"TrackDuration": track['duration'],
"TrackNumber": track['number'],
"CleanName": cleanname
}
}
match = myDB.action('SELECT Location, BitRate, Format from have WHERE CleanName=?', [cleanname]).fetchone()
match = myDB.action('SELECT Location, BitRate, Format from have WHERE CleanName=?',
[cleanname]).fetchone()
if not match:
match = myDB.action('SELECT Location, BitRate, Format from have WHERE ArtistName LIKE ? AND AlbumTitle LIKE ? AND TrackTitle LIKE ?', [artist['artist_name'], rg['title'], track['title']]).fetchone()
#if not match:
#match = myDB.action('SELECT Location, BitRate, Format from have WHERE TrackID=?', [track['id']]).fetchone()
match = myDB.action(
'SELECT Location, BitRate, Format from have WHERE ArtistName LIKE ? AND AlbumTitle LIKE ? AND TrackTitle LIKE ?',
[artist['artist_name'], rg['title'], track['title']]).fetchone()
# if not match:
# match = myDB.action('SELECT Location, BitRate, Format from have WHERE TrackID=?', [track['id']]).fetchone()
if match:
newValueDict['Location'] = match['Location']
newValueDict['BitRate'] = match['BitRate']
newValueDict['Format'] = match['Format']
#myDB.action('UPDATE have SET Matched="True" WHERE Location=?', [match['Location']])
myDB.action('UPDATE have SET Matched=? WHERE Location=?', (rg['id'], match['Location']))
# myDB.action('UPDATE have SET Matched="True" WHERE Location=?', [match['Location']])
myDB.action('UPDATE have SET Matched=? WHERE Location=?',
(rg['id'], match['Location']))
myDB.upsert("alltracks", newValueDict, controlValueDict)
# Delete matched tracks from the have table
#myDB.action('DELETE from have WHERE Matched="True"')
# myDB.action('DELETE from have WHERE Matched="True"')
# If there's no release in the main albums tables, add the default (hybrid)
# If there is a release, check the ReleaseID against the AlbumID to see if they differ (user updated)
@@ -408,7 +436,7 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
"Type": album['Type'],
"ReleaseCountry": album['ReleaseCountry'],
"ReleaseFormat": album['ReleaseFormat']
}
}
if rg_exists:
newValueDict['DateAdded'] = rg_exists['DateAdded']
@@ -425,14 +453,17 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
newValueDict['Status'] = "Wanted"
# Sometimes "new" albums are added to musicbrainz after their release date, so let's try to catch these
# The first test just makes sure we have year-month-day
elif helpers.get_age(album['ReleaseDate']) and helpers.get_age(today) - helpers.get_age(album['ReleaseDate']) < 21 and headphones.CONFIG.AUTOWANT_UPCOMING:
elif helpers.get_age(album['ReleaseDate']) and helpers.get_age(
today) - helpers.get_age(
album['ReleaseDate']) < 21 and headphones.CONFIG.AUTOWANT_UPCOMING:
newValueDict['Status'] = "Wanted"
else:
newValueDict['Status'] = "Skipped"
myDB.upsert("albums", newValueDict, controlValueDict)
tracks = myDB.action('SELECT * from alltracks WHERE ReleaseID=?', [releaseid]).fetchall()
tracks = myDB.action('SELECT * from alltracks WHERE ReleaseID=?',
[releaseid]).fetchall()
# This is used to see how many tracks you have from an album - to
# mark it as downloaded. Default is 80%, can be set in config as
@@ -441,7 +472,7 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
if total_track_count == 0:
logger.warning("Total track count is zero for Release ID " +
"'%s', skipping.", releaseid)
"'%s', skipping.", releaseid)
continue
for track in tracks:
@@ -449,35 +480,43 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
"AlbumID": rg['id']}
newValueDict = {"ArtistID": track['ArtistID'],
"ArtistName": track['ArtistName'],
"AlbumTitle": track['AlbumTitle'],
"AlbumASIN": track['AlbumASIN'],
"ReleaseID": track['ReleaseID'],
"TrackTitle": track['TrackTitle'],
"TrackDuration": track['TrackDuration'],
"TrackNumber": track['TrackNumber'],
"CleanName": track['CleanName'],
"Location": track['Location'],
"Format": track['Format'],
"BitRate": track['BitRate']
}
"ArtistName": track['ArtistName'],
"AlbumTitle": track['AlbumTitle'],
"AlbumASIN": track['AlbumASIN'],
"ReleaseID": track['ReleaseID'],
"TrackTitle": track['TrackTitle'],
"TrackDuration": track['TrackDuration'],
"TrackNumber": track['TrackNumber'],
"CleanName": track['CleanName'],
"Location": track['Location'],
"Format": track['Format'],
"BitRate": track['BitRate']
}
myDB.upsert("tracks", newValueDict, controlValueDict)
# Mark albums as downloaded if they have at least 80% (by default, configurable) of the album
have_track_count = len(myDB.select('SELECT * from tracks WHERE AlbumID=? AND Location IS NOT NULL', [rg['id']]))
have_track_count = len(
myDB.select('SELECT * from tracks WHERE AlbumID=? AND Location IS NOT NULL',
[rg['id']]))
marked_as_downloaded = False
if rg_exists:
if rg_exists['Status'] == 'Skipped' and ((have_track_count / float(total_track_count)) >= (headphones.CONFIG.ALBUM_COMPLETION_PCT / 100.0)):
myDB.action('UPDATE albums SET Status=? WHERE AlbumID=?', ['Downloaded', rg['id']])
if rg_exists['Status'] == 'Skipped' and (
(have_track_count / float(total_track_count)) >= (
headphones.CONFIG.ALBUM_COMPLETION_PCT / 100.0)):
myDB.action('UPDATE albums SET Status=? WHERE AlbumID=?',
['Downloaded', rg['id']])
marked_as_downloaded = True
else:
if (have_track_count / float(total_track_count)) >= (headphones.CONFIG.ALBUM_COMPLETION_PCT / 100.0):
myDB.action('UPDATE albums SET Status=? WHERE AlbumID=?', ['Downloaded', rg['id']])
if (have_track_count / float(total_track_count)) >= (
headphones.CONFIG.ALBUM_COMPLETION_PCT / 100.0):
myDB.action('UPDATE albums SET Status=? WHERE AlbumID=?',
['Downloaded', rg['id']])
marked_as_downloaded = True
logger.info(u"[%s] Seeing if we need album art for %s" % (artist['artist_name'], rg['title']))
logger.info(
u"[%s] Seeing if we need album art for %s" % (artist['artist_name'], rg['title']))
cache.getThumb(AlbumID=rg['id'])
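The downloaded test above is a plain ratio against ALBUM_COMPLETION_PCT (80 by default, per the comment). Worked out with hypothetical counts:

# Hedged sketch of the completion-percentage test.
have_track_count, total_track_count = 9, 11
ALBUM_COMPLETION_PCT = 80
if have_track_count / float(total_track_count) >= ALBUM_COMPLETION_PCT / 100.0:
    print('mark album Downloaded')  # 9/11 is ~0.818, which clears 0.80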
# Start a search for the album if it's new, hasn't been marked as
@@ -487,7 +526,8 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
album_searches.append(rg['id'])
else:
if skip_log == 0:
logger.info(u"[%s] No new releases, so no changes made to %s" % (artist['artist_name'], rg['title']))
logger.info(u"[%s] No new releases, so no changes made to %s" % (
artist['artist_name'], rg['title']))
time.sleep(3)
finalize_update(artistid, artist['artist_name'], errors)
@@ -499,7 +539,9 @@ def addArtisttoDB(artistid, extrasonly=False, forcefull=False, type="artist"):
metacritic.update(artistid, artist['artist_name'], artist['releasegroups'])
if errors:
logger.info("[%s] Finished updating artist: %s but with errors, so not marking it as updated in the database" % (artist['artist_name'], artist['artist_name']))
logger.info(
"[%s] Finished updating artist: %s but with errors, so not marking it as updated in the database" % (
artist['artist_name'], artist['artist_name']))
else:
myDB.action('DELETE FROM newartists WHERE ArtistName = ?', [artist['artist_name']])
logger.info(u"Updating complete for: %s" % artist['artist_name'])
@@ -518,10 +560,18 @@ def finalize_update(artistid, artistname, errors=False):
myDB = db.DBConnection()
latestalbum = myDB.action('SELECT AlbumTitle, ReleaseDate, AlbumID from albums WHERE ArtistID=? order by ReleaseDate DESC', [artistid]).fetchone()
totaltracks = len(myDB.select('SELECT TrackTitle from tracks WHERE ArtistID=? AND AlbumID IN (SELECT AlbumID FROM albums WHERE Status != "Ignored")', [artistid]))
#havetracks = len(myDB.select('SELECT TrackTitle from tracks WHERE ArtistID=? AND Location IS NOT NULL', [artistid])) + len(myDB.select('SELECT TrackTitle from have WHERE ArtistName like ?', [artist['artist_name']]))
havetracks = len(myDB.select('SELECT TrackTitle from tracks WHERE ArtistID=? AND Location IS NOT NULL', [artistid])) + len(myDB.select('SELECT TrackTitle from have WHERE ArtistName like ? AND Matched = "Failed"', [artistname]))
latestalbum = myDB.action(
'SELECT AlbumTitle, ReleaseDate, AlbumID from albums WHERE ArtistID=? order by ReleaseDate DESC',
[artistid]).fetchone()
totaltracks = len(myDB.select(
'SELECT TrackTitle from tracks WHERE ArtistID=? AND AlbumID IN (SELECT AlbumID FROM albums WHERE Status != "Ignored")',
[artistid]))
# havetracks = len(myDB.select('SELECT TrackTitle from tracks WHERE ArtistID=? AND Location IS NOT NULL', [artistid])) + len(myDB.select('SELECT TrackTitle from have WHERE ArtistName like ?', [artist['artist_name']]))
havetracks = len(
myDB.select('SELECT TrackTitle from tracks WHERE ArtistID=? AND Location IS NOT NULL',
[artistid])) + len(
myDB.select('SELECT TrackTitle from have WHERE ArtistName like ? AND Matched = "Failed"',
[artistname]))
controlValueDict = {"ArtistID": artistid}
@@ -544,7 +594,6 @@ def finalize_update(artistid, artistname, errors=False):
def addReleaseById(rid, rgid=None):
myDB = db.DBConnection()
# Create minimum info upfront if added from searchresults
@@ -563,14 +612,18 @@ def addReleaseById(rid, rgid=None):
rgid = None
artistid = None
release_dict = None
results = myDB.select("SELECT albums.ArtistID, releases.ReleaseGroupID from releases, albums WHERE releases.ReleaseID=? and releases.ReleaseGroupID=albums.AlbumID LIMIT 1", [rid])
results = myDB.select(
"SELECT albums.ArtistID, releases.ReleaseGroupID from releases, albums WHERE releases.ReleaseID=? and releases.ReleaseGroupID=albums.AlbumID LIMIT 1",
[rid])
for result in results:
rgid = result['ReleaseGroupID']
artistid = result['ArtistID']
logger.debug("Found a cached releaseid : releasegroupid relationship: " + rid + " : " + rgid)
logger.debug(
"Found a cached releaseid : releasegroupid relationship: " + rid + " : " + rgid)
if not rgid:
#didn't find it in the cache, get the information from MB
logger.debug("Didn't find releaseID " + rid + " in the cache. Looking up its ReleaseGroupID")
# didn't find it in the cache, get the information from MB
logger.debug(
"Didn't find releaseID " + rid + " in the cache. Looking up its ReleaseGroupID")
try:
release_dict = mb.getRelease(rid)
except Exception as e:
@@ -587,10 +640,10 @@ def addReleaseById(rid, rgid=None):
rgid = release_dict['rgid']
artistid = release_dict['artist_id']
#we don't want to make more calls to MB here unless we have to, could be happening quite a lot
# we don't want to make more calls to MB here unless we have to, could be happening quite a lot
rg_exists = myDB.select("SELECT * from albums WHERE AlbumID=?", [rgid])
#make sure the artist exists since I don't know what happens later if it doesn't
# make sure the artist exists since I don't know what happens later if it doesn't
artist_exists = myDB.select("SELECT * from artists WHERE ArtistID=?", [artistid])
if not artist_exists and release_dict:
@@ -599,7 +652,8 @@ def addReleaseById(rid, rgid=None):
else:
sortname = release_dict['artist_name']
logger.info(u"Now manually adding: " + release_dict['artist_name'] + " - with status Paused")
logger.info(
u"Now manually adding: " + release_dict['artist_name'] + " - with status Paused")
controlValueDict = {"ArtistID": release_dict['artist_id']}
newValueDict = {"ArtistName": release_dict['artist_name'],
"ArtistSortName": sortname,
@@ -624,13 +678,14 @@ def addReleaseById(rid, rgid=None):
myDB.upsert("artists", newValueDict, controlValueDict)
elif not artist_exists and not release_dict:
logger.error("Artist does not exist in the database and did not get a valid response from MB. Skipping release.")
logger.error(
"Artist does not exist in the database and did not get a valid response from MB. Skipping release.")
if status == 'Loading':
myDB.action("DELETE FROM albums WHERE AlbumID=?", [rgid])
return
if not rg_exists and release_dict or status == 'Loading' and release_dict: #it should never be the case that we have an rg and not the artist
#but if it is this will fail
if not rg_exists and release_dict or status == 'Loading' and release_dict: # it should never be the case that we have an rg and not the artist
# but if it is this will fail
logger.info(u"Now adding-by-id album (" + release_dict['title'] + ") from id: " + rgid)
controlValueDict = {"AlbumID": rgid}
if status != 'Loading':
@@ -639,7 +694,8 @@ def addReleaseById(rid, rgid=None):
newValueDict = {"ArtistID": release_dict['artist_id'],
"ReleaseID": rgid,
"ArtistName": release_dict['artist_name'],
"AlbumTitle": release_dict['title'] if 'title' in release_dict else release_dict['rg_title'],
"AlbumTitle": release_dict['title'] if 'title' in release_dict else
release_dict['rg_title'],
"AlbumASIN": release_dict['asin'],
"ReleaseDate": release_dict['date'],
"DateAdded": helpers.today(),
@@ -650,41 +706,48 @@ def addReleaseById(rid, rgid=None):
myDB.upsert("albums", newValueDict, controlValueDict)
#keep a local cache of these so that external programs that are adding releasesByID don't hammer MB
# keep a local cache of these so that external programs that are adding releasesByID don't hammer MB
myDB.action('INSERT INTO releases VALUES( ?, ?)', [rid, release_dict['rgid']])
for track in release_dict['tracks']:
cleanname = helpers.cleanName(release_dict['artist_name'] + ' ' + release_dict['rg_title'] + ' ' + track['title'])
cleanname = helpers.cleanName(
release_dict['artist_name'] + ' ' + release_dict['rg_title'] + ' ' + track['title'])
controlValueDict = {"TrackID": track['id'],
"AlbumID": rgid}
newValueDict = {"ArtistID": release_dict['artist_id'],
"ArtistName": release_dict['artist_name'],
"AlbumTitle": release_dict['rg_title'],
"AlbumASIN": release_dict['asin'],
"TrackTitle": track['title'],
"TrackDuration": track['duration'],
"TrackNumber": track['number'],
"CleanName": cleanname
}
"ArtistName": release_dict['artist_name'],
"AlbumTitle": release_dict['rg_title'],
"AlbumASIN": release_dict['asin'],
"TrackTitle": track['title'],
"TrackDuration": track['duration'],
"TrackNumber": track['number'],
"CleanName": cleanname
}
match = myDB.action('SELECT Location, BitRate, Format, Matched from have WHERE CleanName=?', [cleanname]).fetchone()
match = myDB.action(
'SELECT Location, BitRate, Format, Matched from have WHERE CleanName=?',
[cleanname]).fetchone()
if not match:
match = myDB.action('SELECT Location, BitRate, Format, Matched from have WHERE ArtistName LIKE ? AND AlbumTitle LIKE ? AND TrackTitle LIKE ?', [release_dict['artist_name'], release_dict['rg_title'], track['title']]).fetchone()
match = myDB.action(
'SELECT Location, BitRate, Format, Matched from have WHERE ArtistName LIKE ? AND AlbumTitle LIKE ? AND TrackTitle LIKE ?',
[release_dict['artist_name'], release_dict['rg_title'],
track['title']]).fetchone()
#if not match:
#match = myDB.action('SELECT Location, BitRate, Format from have WHERE TrackID=?', [track['id']]).fetchone()
# if not match:
# match = myDB.action('SELECT Location, BitRate, Format from have WHERE TrackID=?', [track['id']]).fetchone()
if match:
newValueDict['Location'] = match['Location']
newValueDict['BitRate'] = match['BitRate']
newValueDict['Format'] = match['Format']
#myDB.action('DELETE from have WHERE Location=?', [match['Location']])
# myDB.action('DELETE from have WHERE Location=?', [match['Location']])
# If the album has been scanned before adding the release it will be unmatched, update to matched
if match['Matched'] == 'Failed':
myDB.action('UPDATE have SET Matched=? WHERE Location=?', (release_dict['rgid'], match['Location']))
myDB.action('UPDATE have SET Matched=? WHERE Location=?',
(release_dict['rgid'], match['Location']))
myDB.upsert("tracks", newValueDict, controlValueDict)
@@ -703,7 +766,8 @@ def addReleaseById(rid, rgid=None):
searcher.searchforalbum(rgid, False)
elif not rg_exists and not release_dict:
logger.error("ReleaseGroup does not exist in the database and did not get a valid response from MB. Skipping release.")
logger.error(
"ReleaseGroup does not exist in the database and did not get a valid response from MB. Skipping release.")
if status == 'Loading':
myDB.action("DELETE FROM albums WHERE AlbumID=?", [rgid])
return
@@ -811,11 +875,13 @@ def getHybridRelease(fullreleaselist):
sortable_release_list.sort(key=lambda x: getSortableReleaseDate(x['releasedate']))
average_tracks = sum(x['trackscount'] for x in sortable_release_list) / float(len(sortable_release_list))
average_tracks = sum(x['trackscount'] for x in sortable_release_list) / float(
len(sortable_release_list))
for item in sortable_release_list:
item['trackscount_delta'] = abs(average_tracks - item['trackscount'])
a = helpers.multikeysort(sortable_release_list, ['-hasasin', 'country', 'format', 'trackscount_delta'])
a = helpers.multikeysort(sortable_release_list,
['-hasasin', 'country', 'format', 'trackscount_delta'])
release_dict = {'ReleaseDate': sortable_release_list[0]['releasedate'],
'Tracks': a[0]['tracks'],

View File

@@ -14,15 +14,14 @@
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
import random
import headphones
import headphones.lock
from headphones import db, logger, request
from collections import defaultdict
TIMEOUT = 60.0 # seconds
REQUEST_LIMIT = 1.0 / 5 # seconds
import headphones
import headphones.lock
from headphones import db, logger, request
TIMEOUT = 60.0 # seconds
REQUEST_LIMIT = 1.0 / 5 # seconds
ENTRY_POINT = "http://ws.audioscrobbler.com/2.0/"
API_KEY = "395e6ec6bb557382fc41fde867bce66f"

View File

@@ -14,17 +14,17 @@
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
import os
import math
import headphones
from beets.mediafile import MediaFile, FileTypeError, UnreadableFileError
from headphones import db, logger, helpers, importer, lastfm
# You can scan a single directory and append it to the current library by
# specifying append=True, ArtistID and ArtistName.
def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
cron=False, artistScan=False):
cron=False, artistScan=False):
if cron and not headphones.CONFIG.LIBRARYSCAN:
return
@@ -40,7 +40,8 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
dir = dir.encode(headphones.SYS_ENCODING)
if not os.path.isdir(dir):
logger.warn('Cannot find directory: %s. Not scanning' % dir.decode(headphones.SYS_ENCODING, 'replace'))
logger.warn('Cannot find directory: %s. Not scanning' % dir.decode(headphones.SYS_ENCODING,
'replace'))
return
myDB = db.DBConnection()
@@ -50,13 +51,16 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
if not append:
# Clean up bad filepaths
tracks = myDB.select('SELECT Location from alltracks WHERE Location IS NOT NULL UNION SELECT Location from tracks WHERE Location IS NOT NULL')
tracks = myDB.select(
'SELECT Location from alltracks WHERE Location IS NOT NULL UNION SELECT Location from tracks WHERE Location IS NOT NULL')
for track in tracks:
encoded_track_string = track['Location'].encode(headphones.SYS_ENCODING, 'replace')
if not os.path.isfile(encoded_track_string):
myDB.action('UPDATE tracks SET Location=?, BitRate=?, Format=? WHERE Location=?', [None, None, None, track['Location']])
myDB.action('UPDATE alltracks SET Location=?, BitRate=?, Format=? WHERE Location=?', [None, None, None, track['Location']])
myDB.action('UPDATE tracks SET Location=?, BitRate=?, Format=? WHERE Location=?',
[None, None, None, track['Location']])
myDB.action('UPDATE alltracks SET Location=?, BitRate=?, Format=? WHERE Location=?',
[None, None, None, track['Location']])
del_have_tracks = myDB.select('SELECT Location, Matched, ArtistName from have')
@@ -67,7 +71,9 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
# Make sure deleted files get accounted for when updating artist track counts
new_artists.append(track['ArtistName'])
myDB.action('DELETE FROM have WHERE Location=?', [track['Location']])
logger.info('File %s removed from Headphones, as it is no longer on disk' % encoded_track_string.decode(headphones.SYS_ENCODING, 'replace'))
logger.info(
'File %s removed from Headphones, as it is no longer on disk' % encoded_track_string.decode(
headphones.SYS_ENCODING, 'replace'))
bitrates = []
song_list = []
@@ -89,9 +95,14 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
latest_subdirectory.append(subdirectory)
if file_count == 0 and r.replace(dir, '') != '':
logger.info("[%s] Now scanning subdirectory %s" % (dir.decode(headphones.SYS_ENCODING, 'replace'), subdirectory.decode(headphones.SYS_ENCODING, 'replace')))
elif latest_subdirectory[file_count] != latest_subdirectory[file_count - 1] and file_count != 0:
logger.info("[%s] Now scanning subdirectory %s" % (dir.decode(headphones.SYS_ENCODING, 'replace'), subdirectory.decode(headphones.SYS_ENCODING, 'replace')))
logger.info("[%s] Now scanning subdirectory %s" % (
dir.decode(headphones.SYS_ENCODING, 'replace'),
subdirectory.decode(headphones.SYS_ENCODING, 'replace')))
elif latest_subdirectory[file_count] != latest_subdirectory[
file_count - 1] and file_count != 0:
logger.info("[%s] Now scanning subdirectory %s" % (
dir.decode(headphones.SYS_ENCODING, 'replace'),
subdirectory.decode(headphones.SYS_ENCODING, 'replace')))
song = os.path.join(r, files)
@@ -102,10 +113,13 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
try:
f = MediaFile(song)
except (FileTypeError, UnreadableFileError):
logger.warning("Cannot read media file '%s', skipping. It may be corrupted or not a media file.", unicode_song_path)
logger.warning(
"Cannot read media file '%s', skipping. It may be corrupted or not a media file.",
unicode_song_path)
continue
except IOError:
logger.warning("Cannnot read media file '%s', skipping. Does the file exists?", unicode_song_path)
logger.warning("Cannnot read media file '%s', skipping. Does the file exists?",
unicode_song_path)
continue
# Grab the bitrates for the auto detect bit rate option
@@ -131,45 +145,52 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
controlValueDict = {'Location': unicode_song_path}
newValueDict = {'TrackID': f.mb_trackid,
#'ReleaseID' : f.mb_albumid,
'ArtistName': f_artist,
'AlbumTitle': f.album,
'TrackNumber': f.track,
'TrackLength': f.length,
'Genre': f.genre,
'Date': f.date,
'TrackTitle': f.title,
'BitRate': f.bitrate,
'Format': f.format,
'CleanName': CleanName
}
# 'ReleaseID' : f.mb_albumid,
'ArtistName': f_artist,
'AlbumTitle': f.album,
'TrackNumber': f.track,
'TrackLength': f.length,
'Genre': f.genre,
'Date': f.date,
'TrackTitle': f.title,
'BitRate': f.bitrate,
'Format': f.format,
'CleanName': CleanName
}
#song_list.append(song_dict)
check_exist_song = myDB.action("SELECT * FROM have WHERE Location=?", [unicode_song_path]).fetchone()
#Only attempt to match songs that are new, haven't yet been matched, or metadata has changed.
# song_list.append(song_dict)
check_exist_song = myDB.action("SELECT * FROM have WHERE Location=?",
[unicode_song_path]).fetchone()
# Only attempt to match songs that are new, haven't yet been matched, or metadata has changed.
if not check_exist_song:
#This is a new track
# This is a new track
if f_artist:
new_artists.append(f_artist)
myDB.upsert("have", newValueDict, controlValueDict)
new_song_count += 1
else:
if check_exist_song['ArtistName'] != f_artist or check_exist_song['AlbumTitle'] != f.album or check_exist_song['TrackTitle'] != f.title:
#Important track metadata has been modified, need to run matcher again
if check_exist_song['ArtistName'] != f_artist or check_exist_song[
'AlbumTitle'] != f.album or check_exist_song['TrackTitle'] != f.title:
# Important track metadata has been modified, need to run matcher again
if f_artist and f_artist != check_exist_song['ArtistName']:
new_artists.append(f_artist)
elif f_artist and f_artist == check_exist_song['ArtistName'] and check_exist_song['Matched'] != "Ignored":
elif f_artist and f_artist == check_exist_song['ArtistName'] and \
check_exist_song['Matched'] != "Ignored":
new_artists.append(f_artist)
else:
continue
newValueDict['Matched'] = None
myDB.upsert("have", newValueDict, controlValueDict)
myDB.action('UPDATE tracks SET Location=?, BitRate=?, Format=? WHERE Location=?', [None, None, None, unicode_song_path])
myDB.action('UPDATE alltracks SET Location=?, BitRate=?, Format=? WHERE Location=?', [None, None, None, unicode_song_path])
myDB.action(
'UPDATE tracks SET Location=?, BitRate=?, Format=? WHERE Location=?',
[None, None, None, unicode_song_path])
myDB.action(
'UPDATE alltracks SET Location=?, BitRate=?, Format=? WHERE Location=?',
[None, None, None, unicode_song_path])
new_song_count += 1
else:
#This track information hasn't changed
# This track information hasn't changed
if f_artist and check_exist_song['Matched'] != "Ignored":
new_artists.append(f_artist)
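The block above distinguishes three cases for every scanned file: a brand-new track, a track whose key tags changed (so the matcher must run again), and an unchanged track. A minimal sketch of that decision, with hypothetical names (`existing` stands for the row from `have`, `tags` for the freshly read MediaFile):

    # Sketch only; `existing` and `tags` are illustrative names, not the real API.
    def match_state(existing, tags):
        if existing is None:
            return 'new'        # insert into have, count as a new song
        if (existing['ArtistName'] != tags.artist or
                existing['AlbumTitle'] != tags.album or
                existing['TrackTitle'] != tags.title):
            return 'rematch'    # metadata changed: reset Matched, clear old locations
        return 'unchanged'      # keep the existing match, just recount the artist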
@@ -177,9 +198,13 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
# Now we start track matching
logger.info("%s new/modified songs found and added to the database" % new_song_count)
song_list = myDB.action("SELECT * FROM have WHERE Matched IS NULL AND LOCATION LIKE ?", [dir.decode(headphones.SYS_ENCODING, 'replace') + "%"])
total_number_of_songs = myDB.action("SELECT COUNT(*) FROM have WHERE Matched IS NULL AND LOCATION LIKE ?", [dir.decode(headphones.SYS_ENCODING, 'replace') + "%"]).fetchone()[0]
logger.info("Found " + str(total_number_of_songs) + " new/modified tracks in: '" + dir.decode(headphones.SYS_ENCODING, 'replace') + "'. Matching tracks to the appropriate releases....")
song_list = myDB.action("SELECT * FROM have WHERE Matched IS NULL AND LOCATION LIKE ?",
[dir.decode(headphones.SYS_ENCODING, 'replace') + "%"])
total_number_of_songs = \
myDB.action("SELECT COUNT(*) FROM have WHERE Matched IS NULL AND LOCATION LIKE ?",
[dir.decode(headphones.SYS_ENCODING, 'replace') + "%"]).fetchone()[0]
logger.info("Found " + str(total_number_of_songs) + " new/modified tracks in: '" + dir.decode(
headphones.SYS_ENCODING, 'replace') + "'. Matching tracks to the appropriate releases....")
# Sort the song_list by most vague (e.g. no trackid or releaseid) to most specific (both trackid & releaseid)
# When we insert into the database, the tracks with the most specific information will overwrite the more general matches
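The sort described above is what makes the upserts safe: rows are processed from least specific (no TrackID or ReleaseID) to most specific (both), so the best-informed match is written last and wins. A sketch of such a sort key, assuming a hypothetical `song_rows` list whose rows expose those two columns:

    # Rows with neither ID sort first, rows with both sort last.
    def specificity(song):
        return (song['TrackID'] is not None) + (song['ReleaseID'] is not None)

    ordered = sorted(song_rows, key=specificity)  # song_rows is hypothetical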
@@ -190,6 +215,7 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
# We'll use this to give a % completion, just because the track matching might take a while
song_count = 0
latest_artist = []
last_completion_percentage = 0
for song in song_list:
@@ -200,27 +226,30 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
logger.info("Now matching songs by %s" % song['ArtistName'])
song_count += 1
completion_percentage = float(song_count) / total_number_of_songs * 100
completion_percentage = math.floor(float(song_count) / total_number_of_songs * 1000) / 10
if completion_percentage % 10 == 0:
if completion_percentage >= (last_completion_percentage + 10):
logger.info("Track matching is " + str(completion_percentage) + "% complete")
last_completion_percentage = completion_percentage
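This hunk fixes the track-matching progress logging. The old test `completion_percentage % 10 == 0` almost never fired, because a float percentage rarely lands exactly on a multiple of ten; the new code floors to one decimal and logs whenever another full ten percent has passed. A self-contained sketch of the corrected logic:

    import math

    last_logged = 0
    total = 1234  # hypothetical number of tracks to match
    for done in range(1, total + 1):
        pct = math.floor(float(done) / total * 1000) / 10  # e.g. 10.1
        if pct >= last_logged + 10:
            print("Track matching is %s%% complete" % pct)
            last_logged = pct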
#THE "MORE-SPECIFIC" CLAUSES HERE HAVE ALL BEEN REMOVED. WHEN RUNNING A LIBRARY SCAN, THE ONLY CLAUSES THAT
#EVER GOT HIT WERE [ARTIST/ALBUM/TRACK] OR CLEANNAME. ARTISTID & RELEASEID ARE NEVER PASSED TO THIS FUNCTION,
#ARE NEVER FOUND, AND THE OTHER CLAUSES WERE NEVER HIT. FURTHERMORE, OTHER MATCHING FUNCTIONS IN THIS PROGRAM
#(IMPORTER.PY, MB.PY) SIMPLY DO A [ARTIST/ALBUM/TRACK] OR CLEANNAME MATCH, SO IT'S ALL CONSISTENT.
# THE "MORE-SPECIFIC" CLAUSES HERE HAVE ALL BEEN REMOVED. WHEN RUNNING A LIBRARY SCAN, THE ONLY CLAUSES THAT
# EVER GOT HIT WERE [ARTIST/ALBUM/TRACK] OR CLEANNAME. ARTISTID & RELEASEID ARE NEVER PASSED TO THIS FUNCTION,
# ARE NEVER FOUND, AND THE OTHER CLAUSES WERE NEVER HIT. FURTHERMORE, OTHER MATCHING FUNCTIONS IN THIS PROGRAM
# (IMPORTER.PY, MB.PY) SIMPLY DO A [ARTIST/ALBUM/TRACK] OR CLEANNAME MATCH, SO IT'S ALL CONSISTENT.
if song['ArtistName'] and song['AlbumTitle'] and song['TrackTitle']:
track = myDB.action('SELECT ArtistName, AlbumTitle, TrackTitle, AlbumID from tracks WHERE ArtistName LIKE ? AND AlbumTitle LIKE ? AND TrackTitle LIKE ?', [song['ArtistName'], song['AlbumTitle'], song['TrackTitle']]).fetchone()
track = myDB.action(
'SELECT ArtistName, AlbumTitle, TrackTitle, AlbumID from tracks WHERE ArtistName LIKE ? AND AlbumTitle LIKE ? AND TrackTitle LIKE ?',
[song['ArtistName'], song['AlbumTitle'], song['TrackTitle']]).fetchone()
have_updated = False
if track:
controlValueDict = {'ArtistName': track['ArtistName'],
'AlbumTitle': track['AlbumTitle'],
'TrackTitle': track['TrackTitle']}
'AlbumTitle': track['AlbumTitle'],
'TrackTitle': track['TrackTitle']}
newValueDict = {'Location': song['Location'],
'BitRate': song['BitRate'],
'Format': song['Format']}
'BitRate': song['BitRate'],
'Format': song['Format']}
myDB.upsert("tracks", newValueDict, controlValueDict)
controlValueDict2 = {'Location': song['Location']}
@@ -228,12 +257,13 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
myDB.upsert("have", newValueDict2, controlValueDict2)
have_updated = True
else:
track = myDB.action('SELECT CleanName, AlbumID from tracks WHERE CleanName LIKE ?', [song['CleanName']]).fetchone()
track = myDB.action('SELECT CleanName, AlbumID from tracks WHERE CleanName LIKE ?',
[song['CleanName']]).fetchone()
if track:
controlValueDict = {'CleanName': track['CleanName']}
newValueDict = {'Location': song['Location'],
'BitRate': song['BitRate'],
'Format': song['Format']}
'BitRate': song['BitRate'],
'Format': song['Format']}
myDB.upsert("tracks", newValueDict, controlValueDict)
controlValueDict2 = {'Location': song['Location']}
@@ -246,26 +276,30 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
myDB.upsert("have", newValueDict2, controlValueDict2)
have_updated = True
alltrack = myDB.action('SELECT ArtistName, AlbumTitle, TrackTitle, AlbumID from alltracks WHERE ArtistName LIKE ? AND AlbumTitle LIKE ? AND TrackTitle LIKE ?', [song['ArtistName'], song['AlbumTitle'], song['TrackTitle']]).fetchone()
alltrack = myDB.action(
'SELECT ArtistName, AlbumTitle, TrackTitle, AlbumID from alltracks WHERE ArtistName LIKE ? AND AlbumTitle LIKE ? AND TrackTitle LIKE ?',
[song['ArtistName'], song['AlbumTitle'], song['TrackTitle']]).fetchone()
if alltrack:
controlValueDict = {'ArtistName': alltrack['ArtistName'],
'AlbumTitle': alltrack['AlbumTitle'],
'TrackTitle': alltrack['TrackTitle']}
'AlbumTitle': alltrack['AlbumTitle'],
'TrackTitle': alltrack['TrackTitle']}
newValueDict = {'Location': song['Location'],
'BitRate': song['BitRate'],
'Format': song['Format']}
'BitRate': song['BitRate'],
'Format': song['Format']}
myDB.upsert("alltracks", newValueDict, controlValueDict)
controlValueDict2 = {'Location': song['Location']}
newValueDict2 = {'Matched': alltrack['AlbumID']}
myDB.upsert("have", newValueDict2, controlValueDict2)
else:
alltrack = myDB.action('SELECT CleanName, AlbumID from alltracks WHERE CleanName LIKE ?', [song['CleanName']]).fetchone()
alltrack = myDB.action(
'SELECT CleanName, AlbumID from alltracks WHERE CleanName LIKE ?',
[song['CleanName']]).fetchone()
if alltrack:
controlValueDict = {'CleanName': alltrack['CleanName']}
newValueDict = {'Location': song['Location'],
'BitRate': song['BitRate'],
'Format': song['Format']}
'BitRate': song['BitRate'],
'Format': song['Format']}
myDB.upsert("alltracks", newValueDict, controlValueDict)
controlValueDict2 = {'Location': song['Location']}
@@ -283,9 +317,10 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
newValueDict2 = {'Matched': "Failed"}
myDB.upsert("have", newValueDict2, controlValueDict2)
#######myDB.action('INSERT INTO have (ArtistName, AlbumTitle, TrackNumber, TrackTitle, TrackLength, BitRate, Genre, Date, TrackID, Location, CleanName, Format) VALUES( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)', [song['ArtistName'], song['AlbumTitle'], song['TrackNumber'], song['TrackTitle'], song['TrackLength'], song['BitRate'], song['Genre'], song['Date'], song['TrackID'], song['Location'], CleanName, song['Format']])
#######myDB.action('INSERT INTO have (ArtistName, AlbumTitle, TrackNumber, TrackTitle, TrackLength, BitRate, Genre, Date, TrackID, Location, CleanName, Format) VALUES( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)', [song['ArtistName'], song['AlbumTitle'], song['TrackNumber'], song['TrackTitle'], song['TrackLength'], song['BitRate'], song['Genre'], song['Date'], song['TrackID'], song['Location'], CleanName, song['Format']])
logger.info('Completed matching tracks from directory: %s' % dir.decode(headphones.SYS_ENCODING, 'replace'))
logger.info('Completed matching tracks from directory: %s' % dir.decode(headphones.SYS_ENCODING,
'replace'))
if not append or artistScan:
logger.info('Updating scanned artist track counts')
@@ -294,29 +329,32 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
unique_artists = {}.fromkeys(new_artists).keys()
current_artists = myDB.select('SELECT ArtistName, ArtistID from artists')
#There was a bug where artists with special characters (-,') would show up in new artists.
# There was a bug where artists with special characters (-,') would show up in new artists.
artist_list = [
x for x in unique_artists
if helpers.cleanName(x).lower() not in [
helpers.cleanName(y[0]).lower()
for y in current_artists
]
]
]
artists_checked = [
x for x in unique_artists
if helpers.cleanName(x).lower() in [
helpers.cleanName(y[0]).lower()
for y in current_artists
]
]
]
# Update track counts
for artist in artists_checked:
# Have tracks are selected from tracks table and not all tracks because of duplicates
# We update the track count upon an album switch to compliment this
havetracks = (
len(myDB.select('SELECT TrackTitle from tracks WHERE ArtistName like ? AND Location IS NOT NULL', [artist]))
+ len(myDB.select('SELECT TrackTitle from have WHERE ArtistName like ? AND Matched = "Failed"', [artist]))
len(myDB.select(
'SELECT TrackTitle from tracks WHERE ArtistName like ? AND Location IS NOT NULL',
[artist])) + len(myDB.select(
'SELECT TrackTitle from have WHERE ArtistName like ? AND Matched = "Failed"',
[artist]))
)
# Note: some people complain about having "artist have tracks" > # of tracks total in artist official releases
# (can fix by getting rid of second len statement)
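The HaveTracks figure above is the sum of two queries: tracks matched to a known release (tracks.Location is set) plus files whose matching failed outright, which is why the adjacent note warns the count can exceed an artist's official total. Restated compactly, with the same SQL as the hunk:

    matched = myDB.select(
        'SELECT TrackTitle from tracks WHERE ArtistName like ? AND Location IS NOT NULL',
        [artist])
    failed = myDB.select(
        'SELECT TrackTitle from have WHERE ArtistName like ? AND Matched = "Failed"',
        [artist])
    havetracks = len(matched) + len(failed)  # dropping `failed` avoids over-counting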
@@ -330,7 +368,7 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
importer.artistlist_to_mbids(artist_list)
else:
logger.info('To add these artists, go to Manage->Manage New Artists')
#myDB.action('DELETE from newartists')
# myDB.action('DELETE from newartists')
for artist in artist_list:
myDB.action('INSERT OR IGNORE INTO newartists VALUES (?)', [artist])
@@ -341,7 +379,11 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
# If we're appending a new album to the database, update the artists total track counts
logger.info('Updating artist track counts')
havetracks = len(myDB.select('SELECT TrackTitle from tracks WHERE ArtistID=? AND Location IS NOT NULL', [ArtistID])) + len(myDB.select('SELECT TrackTitle from have WHERE ArtistName like ? AND Matched = "Failed"', [ArtistName]))
havetracks = len(
myDB.select('SELECT TrackTitle from tracks WHERE ArtistID=? AND Location IS NOT NULL',
[ArtistID])) + len(myDB.select(
'SELECT TrackTitle from have WHERE ArtistName like ? AND Matched = "Failed"',
[ArtistName]))
myDB.action('UPDATE artists SET HaveTracks=? WHERE ArtistID=?', [havetracks, ArtistID])
if not append:
@@ -352,18 +394,21 @@ def libraryScan(dir=None, append=False, ArtistID=None, ArtistName=None,
logger.info('Library scan complete')
#ADDED THIS SECTION TO MARK ALBUMS AS DOWNLOADED IF ARTISTS ARE ADDED EN MASSE BEFORE LIBRARY IS SCANNED
# ADDED THIS SECTION TO MARK ALBUMS AS DOWNLOADED IF ARTISTS ARE ADDED EN MASSE BEFORE LIBRARY IS SCANNED
def update_album_status(AlbumID=None):
myDB = db.DBConnection()
logger.info('Counting matched tracks to mark albums as skipped/downloaded')
if AlbumID:
album_status_updater = myDB.action('SELECT AlbumID, AlbumTitle, Status from albums WHERE AlbumID=?', [AlbumID])
album_status_updater = myDB.action(
'SELECT AlbumID, AlbumTitle, Status from albums WHERE AlbumID=?', [AlbumID])
else:
album_status_updater = myDB.action('SELECT AlbumID, AlbumTitle, Status from albums')
for album in album_status_updater:
track_counter = myDB.action('SELECT Location from tracks where AlbumID=?', [album['AlbumID']])
track_counter = myDB.action('SELECT Location from tracks where AlbumID=?',
[album['AlbumID']])
total_tracks = 0
have_tracks = 0
for track in track_counter:
@@ -383,7 +428,7 @@ def update_album_status(AlbumID=None):
# I think we can only automatically change Skipped->Downloaded when updating
# There was a bug report where this was causing infinite downloads if the album was
# recent, but matched to less than 80%. It would go Downloaded->Skipped->Wanted->Downloaded->Skipped->Wanted->etc....
#else:
# else:
# if album['Status'] == "Skipped" or album['Status'] == "Downloaded":
# new_album_status = "Skipped"
# else:
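The commented-out branch above documents why only the Wanted to Downloaded transition is automatic: also demoting Downloaded to Skipped once produced an endless Downloaded/Skipped/Wanted re-download loop for recent albums matching under 80%. A sketch of the one-way promotion, with the threshold taken from that comment as an assumption:

    def new_status(status, have_tracks, total_tracks, threshold=0.8):
        # Promote only; never demote, so a borderline match can't trigger
        # another download cycle.
        if total_tracks and float(have_tracks) / total_tracks >= threshold:
            if status == 'Wanted':
                return 'Downloaded'
        return status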

View File

@@ -2,11 +2,12 @@
Locking-related classes
"""
import headphones.logger
import time
import threading
import Queue
import headphones.logger
class TimedLock(object):
"""

View File

@@ -13,24 +13,24 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
from headphones import helpers
from logutils.queue import QueueHandler, QueueListener
from logging import handlers
import multiprocessing
import contextlib
import headphones
import threading
import traceback
import logging
import errno
import sys
import os
from headphones import helpers
from logutils.queue import QueueHandler, QueueListener
import headphones
# These settings are for file logging only
FILENAME = "headphones.log"
MAX_SIZE = 1000000 # 1 MB
MAX_SIZE = 1000000 # 1 MB
MAX_FILES = 5
# Headphones logger
@@ -39,6 +39,7 @@ logger = logging.getLogger("headphones")
# Global queue for multiprocessing logging
queue = None
class LogListHandler(logging.Handler):
"""
Log handler for Web UI.
@@ -71,8 +72,8 @@ def listener():
# http://stackoverflow.com/questions/2009278 for more information.
if e.errno == errno.EACCES:
logger.warning("Multiprocess logging disabled, because "
"current user cannot map shared memory. You won't see any" \
"logging generated by the worker processed.")
"current user cannot map shared memory. You won't see any" \
"logging generated by the worker processed.")
# Multiprocess logging may be disabled.
if not queue:
@@ -149,8 +150,10 @@ def initLogger(console=False, log_dir=False, verbose=False):
if log_dir:
filename = os.path.join(log_dir, FILENAME)
file_formatter = logging.Formatter('%(asctime)s - %(levelname)-7s :: %(threadName)s : %(message)s', '%d-%b-%Y %H:%M:%S')
file_handler = handlers.RotatingFileHandler(filename, maxBytes=MAX_SIZE, backupCount=MAX_FILES)
file_formatter = logging.Formatter(
'%(asctime)s - %(levelname)-7s :: %(threadName)s : %(message)s', '%d-%b-%Y %H:%M:%S')
file_handler = handlers.RotatingFileHandler(filename, maxBytes=MAX_SIZE,
backupCount=MAX_FILES)
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(file_formatter)
@@ -158,7 +161,8 @@ def initLogger(console=False, log_dir=False, verbose=False):
# Setup console logger
if console:
console_formatter = logging.Formatter('%(asctime)s - %(levelname)s :: %(threadName)s : %(message)s', '%d-%b-%Y %H:%M:%S')
console_formatter = logging.Formatter(
'%(asctime)s - %(levelname)s :: %(threadName)s : %(message)s', '%d-%b-%Y %H:%M:%S')
console_handler = logging.StreamHandler()
console_handler.setFormatter(console_formatter)
console_handler.setLevel(logging.DEBUG)
@@ -212,11 +216,13 @@ def initHooks(global_exceptions=True, thread_exceptions=True, pass_original=True
raise
except:
excepthook(*sys.exc_info())
self.run = new_run
# Monkey patch the run() by monkey patching the __init__ method
threading.Thread.__init__ = new_init
# Expose logger methods
info = logger.info
warn = logger.warn

View File

@@ -13,18 +13,17 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
import re
import htmlentitydefs
import re
from headphones import logger, request
def getLyrics(artist, song):
params = {"artist": artist.encode('utf-8'),
"song": song.encode('utf-8'),
"fmt": 'xml'
}
"song": song.encode('utf-8'),
"fmt": 'xml'
}
url = 'http://lyrics.wikia.com/api.php'
data = request.request_minidom(url, params=params)
@@ -46,10 +45,13 @@ def getLyrics(artist, song):
logger.warn('Error fetching lyrics from: %s' % lyricsurl)
return
m = re.compile('''<div class='lyricbox'><div class='rtMatcher'>.*?</div>(.*?)<!--''').search(lyricspage)
m = re.compile('''<div class='lyricbox'><div class='rtMatcher'>.*?</div>(.*?)<!--''').search(
lyricspage)
if not m:
m = re.compile('''<div class='lyricbox'><span style="padding:1em"><a href="/Category:Instrumental" title="Instrumental">''').search(lyricspage)
m = re.compile(
'''<div class='lyricbox'><span style="padding:1em"><a href="/Category:Instrumental" title="Instrumental">''').search(
lyricspage)
if m:
return u'(Instrumental)'
else:
@@ -67,12 +69,12 @@ def convert_html_entities(s):
if len(matches) > 0:
hits = set(matches)
for hit in hits:
name = hit[2:-1]
try:
entnum = int(name)
s = s.replace(hit, unichr(entnum))
except ValueError:
pass
name = hit[2:-1]
try:
entnum = int(name)
s = s.replace(hit, unichr(entnum))
except ValueError:
pass
matches = re.findall("&\w+;", s)
hits = set(matches)
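For reference, convert_html_entities resolves numeric entities (&#233;) via unichr first, then falls back to the named-entity table from htmlentitydefs (this is Python 2 code). A hypothetical usage sketch:

    # Hypothetical usage (Python 2, as in this codebase).
    print convert_html_entities(u"Beyonc&#233; &amp; Jay-Z")
    # -> u"Beyoncé & Jay-Z"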

View File

@@ -31,12 +31,12 @@ except ImportError:
mb_lock = headphones.lock.TimedLock(0)
# Quick fix to add mirror switching on the fly. Need to probably return the mbhost & mbport that's
# being used, so we can send those values to the log
def startmb():
mbuser = None
mbpass = None
@@ -66,7 +66,7 @@ def startmb():
if sleepytime == 0:
musicbrainzngs.set_rate_limit(False)
else:
#calling it with an int ends up blocking all requests after the first
# calling it with an int ends up blocking all requests after the first
musicbrainzngs.set_rate_limit(limit_or_interval=float(sleepytime))
mb_lock.minimum_delta = sleepytime
@@ -81,59 +81,72 @@ def startmb():
if not headphones.CONFIG.CUSTOMAUTH and headphones.CONFIG.MIRROR == "custom":
musicbrainzngs.disable_hpauth()
logger.debug('Using the following server values: MBHost: %s, MBPort: %i, Sleep Interval: %i', mbhost, mbport, sleepytime)
logger.debug('Using the following server values: MBHost: %s, MBPort: %i, Sleep Interval: %i',
mbhost, mbport, sleepytime)
return True
def findArtist(name, limit=1):
artistlist = []
artistResults = None
artistlist = []
artistResults = None
chars = set('!?*-')
if any((c in chars) for c in name):
name = '"' + name + '"'
chars = set('!?*-')
if any((c in chars) for c in name):
name = '"' + name + '"'
criteria = {'artist': name.lower()}
criteria = {'artist': name.lower()}
with mb_lock:
try:
artistResults = musicbrainzngs.search_artists(limit=limit, **criteria)['artist-list']
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to query MusicBrainz for %s failed (%s)' % (name, str(e)))
mb_lock.snooze(5)
with mb_lock:
try:
artistResults = musicbrainzngs.search_artists(limit=limit, **criteria)['artist-list']
except ValueError as e:
if "at least one query term is required" in e.message:
logger.error(
"Tried to search without a term, or an empty one. Provided artist (probably emtpy): %s",
name)
return False
else:
raise
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to query MusicBrainz for %s failed (%s)' % (name, str(e)))
mb_lock.snooze(5)
if not artistResults:
return False
for result in artistResults:
if 'disambiguation' in result:
uniquename = unicode(result['sort-name'] + " (" + result['disambiguation'] + ")")
else:
uniquename = unicode(result['sort-name'])
if result['name'] != uniquename and limit == 1:
logger.info(
'Found an artist with a disambiguation: %s - doing an album based search' % name)
artistdict = findArtistbyAlbum(name)
if not artistdict:
logger.info(
'Cannot determine the best match from an artist/album search. Using top match instead')
artistlist.append({
# Just need the artist id if the limit is 1
# 'name': unicode(result['sort-name']),
# 'uniquename': uniquename,
'id': unicode(result['id']),
# 'url': unicode("http://musicbrainz.org/artist/" + result['id']),#probably needs to be changed
# 'score': int(result['ext:score'])
})
else:
artistlist.append(artistdict)
else:
artistlist.append({
'name': unicode(result['sort-name']),
'uniquename': uniquename,
'id': unicode(result['id']),
'url': unicode("http://musicbrainz.org/artist/" + result['id']),
# probably needs to be changed
'score': int(result['ext:score'])
})
return artistlist
if not artistResults:
return False
for result in artistResults:
if 'disambiguation' in result:
uniquename = unicode(result['sort-name'] + " (" + result['disambiguation'] + ")")
else:
uniquename = unicode(result['sort-name'])
if result['name'] != uniquename and limit == 1:
logger.info('Found an artist with a disambiguation: %s - doing an album based search' % name)
artistdict = findArtistbyAlbum(name)
if not artistdict:
logger.info('Cannot determine the best match from an artist/album search. Using top match instead')
artistlist.append({
# Just need the artist id if the limit is 1
# 'name': unicode(result['sort-name']),
# 'uniquename': uniquename,
'id': unicode(result['id']),
# 'url': unicode("http://musicbrainz.org/artist/" + result['id']),#probably needs to be changed
# 'score': int(result['ext:score'])
})
else:
artistlist.append(artistdict)
else:
artistlist.append({
'name': unicode(result['sort-name']),
'uniquename': uniquename,
'id': unicode(result['id']),
'url': unicode("http://musicbrainz.org/artist/" + result['id']),#probably needs to be changed
'score': int(result['ext:score'])
})
return artistlist
def findRelease(name, limit=1, artist=None):
releaselist = []
@@ -151,8 +164,9 @@ def findRelease(name, limit=1, artist=None):
with mb_lock:
try:
releaseResults = musicbrainzngs.search_releases(query=name, limit=limit, artist=artist)['release-list']
except musicbrainzngs.WebServiceError as e: #need to update exceptions
releaseResults = musicbrainzngs.search_releases(query=name, limit=limit, artist=artist)[
'release-list']
except musicbrainzngs.WebServiceError as e: # need to update exceptions
logger.warn('Attempt to query MusicBrainz for "%s" failed: %s' % (name, str(e)))
mb_lock.snooze(5)
@@ -196,55 +210,61 @@ def findRelease(name, limit=1, artist=None):
rg_type = secondary_type
releaselist.append({
'uniquename': unicode(result['artist-credit'][0]['artist']['name']),
'title': unicode(title),
'id': unicode(result['artist-credit'][0]['artist']['id']),
'albumid': unicode(result['id']),
'url': unicode("http://musicbrainz.org/artist/" + result['artist-credit'][0]['artist']['id']),#probably needs to be changed
'albumurl': unicode("http://musicbrainz.org/release/" + result['id']),#probably needs to be changed
'score': int(result['ext:score']),
'date': unicode(result['date']) if 'date' in result else '',
'country': unicode(result['country']) if 'country' in result else '',
'formats': unicode(formats),
'tracks': unicode(tracks),
'rgid': unicode(result['release-group']['id']),
'rgtype': unicode(rg_type)
})
'uniquename': unicode(result['artist-credit'][0]['artist']['name']),
'title': unicode(title),
'id': unicode(result['artist-credit'][0]['artist']['id']),
'albumid': unicode(result['id']),
'url': unicode(
"http://musicbrainz.org/artist/" + result['artist-credit'][0]['artist']['id']),
# probably needs to be changed
'albumurl': unicode("http://musicbrainz.org/release/" + result['id']),
# probably needs to be changed
'score': int(result['ext:score']),
'date': unicode(result['date']) if 'date' in result else '',
'country': unicode(result['country']) if 'country' in result else '',
'formats': unicode(formats),
'tracks': unicode(tracks),
'rgid': unicode(result['release-group']['id']),
'rgtype': unicode(rg_type)
})
return releaselist
def findSeries(name, limit=1):
serieslist = []
seriesResults = None
serieslist = []
seriesResults = None
chars = set('!?*-')
if any((c in chars) for c in name):
name = '"' + name + '"'
chars = set('!?*-')
if any((c in chars) for c in name):
name = '"' + name + '"'
criteria = {'series': name.lower()}
criteria = {'series': name.lower()}
with mb_lock:
try:
seriesResults = musicbrainzngs.search_series(limit=limit, **criteria)['series-list']
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to query MusicBrainz for %s failed (%s)' % (name, str(e)))
mb_lock.snooze(5)
with mb_lock:
try:
seriesResults = musicbrainzngs.search_series(limit=limit, **criteria)['series-list']
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to query MusicBrainz for %s failed (%s)' % (name, str(e)))
mb_lock.snooze(5)
if not seriesResults:
return False
for result in seriesResults:
if 'disambiguation' in result:
uniquename = unicode(result['name'] + " (" + result['disambiguation'] + ")")
else:
uniquename = unicode(result['name'])
serieslist.append({
'uniquename': uniquename,
'name': unicode(result['name']),
'type': unicode(result['type']),
'id': unicode(result['id']),
'url': unicode("http://musicbrainz.org/series/" + result['id']),
# probably needs to be changed
'score': int(result['ext:score'])
})
return serieslist
if not seriesResults:
return False
for result in seriesResults:
if 'disambiguation' in result:
uniquename = unicode(result['name'] + " (" + result['disambiguation'] + ")")
else:
uniquename = unicode(result['name'])
serieslist.append({
'uniquename': uniquename,
'name': unicode(result['name']),
'type': unicode(result['type']),
'id': unicode(result['id']),
'url': unicode("http://musicbrainz.org/series/" + result['id']),#probably needs to be changed
'score': int(result['ext:score'])
})
return serieslist
def getArtist(artistid, extrasonly=False):
artist_dict = {}
@@ -265,7 +285,9 @@ def getArtist(artistid, extrasonly=False):
newRgs = newRgs['release-group-list']
artist['release-group-list'] += newRgs
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to retrieve artist information from MusicBrainz failed for artistid: %s (%s)' % (artistid, str(e)))
logger.warn(
'Attempt to retrieve artist information from MusicBrainz failed for artistid: %s (%s)' % (
artistid, str(e)))
mb_lock.snooze(5)
except Exception as e:
pass
@@ -279,7 +301,7 @@ def getArtist(artistid, extrasonly=False):
if not extrasonly:
for rg in artist['release-group-list']:
if "secondary-type-list" in rg.keys(): #only add releases without a secondary type
if "secondary-type-list" in rg.keys(): # only add releases without a secondary type
continue
releasegroups.append({
'title': unicode(rg['title']),
@@ -293,7 +315,8 @@ def getArtist(artistid, extrasonly=False):
myDB = db.DBConnection()
try:
db_artist = myDB.action('SELECT IncludeExtras, Extras from artists WHERE ArtistID=?', [artistid]).fetchone()
db_artist = myDB.action('SELECT IncludeExtras, Extras from artists WHERE ArtistID=?',
[artistid]).fetchone()
includeExtras = db_artist['IncludeExtras']
except IndexError:
includeExtras = False
@@ -329,7 +352,9 @@ def getArtist(artistid, extrasonly=False):
newRgs = newRgs['release-group-list']
mb_extras_list += newRgs
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to retrieve artist information from MusicBrainz failed for artistid: %s (%s)' % (artistid, str(e)))
logger.warn(
'Attempt to retrieve artist information from MusicBrainz failed for artistid: %s (%s)' % (
artistid, str(e)))
mb_lock.snooze(5)
for rg in mb_extras_list:
@@ -348,14 +373,18 @@ def getArtist(artistid, extrasonly=False):
artist_dict['releasegroups'] = releasegroups
return artist_dict
def getSeries(seriesid):
series_dict = {}
series = None
try:
with mb_lock:
series = musicbrainzngs.get_series_by_id(seriesid,includes=['release-group-rels'])['series']
series = musicbrainzngs.get_series_by_id(seriesid, includes=['release-group-rels'])[
'series']
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to retrieve series information from MusicBrainz failed for seriesid: %s (%s)' % (seriesid, str(e)))
logger.warn(
'Attempt to retrieve series information from MusicBrainz failed for seriesid: %s (%s)' % (
seriesid, str(e)))
mb_lock.snooze(5)
except Exception as e:
pass
@@ -364,7 +393,8 @@ def getSeries(seriesid):
return False
if 'disambiguation' in series:
series_dict['artist_name'] = unicode(series['name'] + " (" + unicode(series['disambiguation']) + ")")
series_dict['artist_name'] = unicode(
series['name'] + " (" + unicode(series['disambiguation']) + ")")
else:
series_dict['artist_name'] = unicode(series['name'])
@@ -373,14 +403,15 @@ def getSeries(seriesid):
for rg in series['release_group-relation-list']:
releasegroup = rg['release-group']
releasegroups.append({
'title':releasegroup['title'],
'date':releasegroup['first-release-date'],
'id':releasegroup['id'],
'type':rg['type']
})
'title': releasegroup['title'],
'date': releasegroup['first-release-date'],
'id': releasegroup['id'],
'type': rg['type']
})
series_dict['releasegroups'] = releasegroups
return series_dict
def getReleaseGroup(rgid):
"""
Returns a list of releases in a release group
@@ -392,7 +423,9 @@ def getReleaseGroup(rgid):
rgid, ["artists", "releases", "media", "discids", ])
releaseGroup = releaseGroup['release-group']
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to retrieve information from MusicBrainz for release group "%s" failed (%s)' % (rgid, str(e)))
logger.warn(
'Attempt to retrieve information from MusicBrainz for release group "%s" failed (%s)' % (
rgid, str(e)))
mb_lock.snooze(5)
if not releaseGroup:
@@ -411,11 +444,16 @@ def getRelease(releaseid, include_artist_info=True):
try:
with mb_lock:
if include_artist_info:
results = musicbrainzngs.get_release_by_id(releaseid, ["artists", "release-groups", "media", "recordings"]).get('release')
results = musicbrainzngs.get_release_by_id(releaseid,
["artists", "release-groups", "media",
"recordings"]).get('release')
else:
results = musicbrainzngs.get_release_by_id(releaseid, ["media", "recordings"]).get('release')
results = musicbrainzngs.get_release_by_id(releaseid, ["media", "recordings"]).get(
'release')
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to retrieve information from MusicBrainz for release "%s" failed (%s)' % (releaseid, str(e)))
logger.warn(
'Attempt to retrieve information from MusicBrainz for release "%s" failed (%s)' % (
releaseid, str(e)))
mb_lock.snooze(5)
if not results:
@@ -443,7 +481,8 @@ def getRelease(releaseid, include_artist_info=True):
try:
release['rg_type'] = unicode(results['release-group']['type'])
if release['rg_type'] == 'Album' and 'secondary-type-list' in results['release-group']:
if release['rg_type'] == 'Album' and 'secondary-type-list' in results[
'release-group']:
secondary_type = unicode(results['release-group']['secondary-type-list'][0])
if secondary_type != release['rg_type']:
release['rg_type'] = secondary_type
@@ -463,7 +502,6 @@ def getRelease(releaseid, include_artist_info=True):
def get_new_releases(rgid, includeExtras=False, forcefull=False):
myDB = db.DBConnection()
results = []
@@ -479,30 +517,33 @@ def get_new_releases(rgid, includeExtras=False, forcefull=False):
newResults = musicbrainzngs.browse_releases(
release_group=rgid,
includes=['artist-credits', 'labels', 'recordings', 'release-groups', 'media'],
release_status = release_status,
release_status=release_status,
limit=limit,
offset=len(results))
if 'release-list' not in newResults:
break #may want to raise an exception here instead ?
break # may want to raise an exception here instead ?
newResults = newResults['release-list']
results += newResults
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to retrieve information from MusicBrainz for release group "%s" failed (%s)' % (rgid, str(e)))
logger.warn(
'Attempt to retrieve information from MusicBrainz for release group "%s" failed (%s)' % (
rgid, str(e)))
mb_lock.snooze(5)
return False
if not results or len(results) == 0:
return False
#Clean all references to releases in dB that are no longer referenced in musicbrainz
# Clean all references to releases in dB that are no longer referenced in musicbrainz
release_list = []
force_repackage1 = 0
if len(results) != 0:
for release_mark in results:
release_list.append(unicode(release_mark['id']))
release_title = release_mark['title']
remove_missing_releases = myDB.action("SELECT ReleaseID FROM allalbums WHERE AlbumID=?", [rgid])
remove_missing_releases = myDB.action("SELECT ReleaseID FROM allalbums WHERE AlbumID=?",
[rgid])
if remove_missing_releases:
for items in remove_missing_releases:
if items['ReleaseID'] not in release_list and items['ReleaseID'] != rgid:
@@ -511,10 +552,13 @@ def get_new_releases(rgid, includeExtras=False, forcefull=False):
myDB.action("DELETE FROM tracks WHERE ReleaseID=?", [items['ReleaseID']])
myDB.action("DELETE FROM allalbums WHERE ReleaseID=?", [items['ReleaseID']])
myDB.action("DELETE FROM alltracks WHERE ReleaseID=?", [items['ReleaseID']])
logger.info("Removing all references to release %s to reflect MusicBrainz" % items['ReleaseID'])
logger.info(
"Removing all references to release %s to reflect MusicBrainz" % items[
'ReleaseID'])
force_repackage1 = 1
else:
logger.info("There was either an error pulling data from MusicBrainz or there might not be any releases for this category")
logger.info(
"There was either an error pulling data from MusicBrainz or there might not be any releases for this category")
num_new_releases = 0
@@ -522,9 +566,10 @@ def get_new_releases(rgid, includeExtras=False, forcefull=False):
release = {}
rel_id_check = releasedata['id']
album_checker = myDB.action('SELECT * from allalbums WHERE ReleaseID=?', [rel_id_check]).fetchone()
album_checker = myDB.action('SELECT * from allalbums WHERE ReleaseID=?',
[rel_id_check]).fetchone()
if not album_checker or forcefull:
#DELETE all references to this release since we're updating it anyway.
# DELETE all references to this release since we're updating it anyway.
myDB.action('DELETE from allalbums WHERE ReleaseID=?', [rel_id_check])
myDB.action('DELETE from alltracks WHERE ReleaseID=?', [rel_id_check])
release['AlbumTitle'] = unicode(releasedata['title'])
@@ -533,7 +578,8 @@ def get_new_releases(rgid, includeExtras=False, forcefull=False):
release['ReleaseDate'] = unicode(releasedata['date']) if 'date' in releasedata else None
release['ReleaseID'] = releasedata['id']
if 'release-group' not in releasedata:
raise Exception('No release group associated with release id ' + releasedata['id'] + ' album id' + rgid)
raise Exception('No release group associated with release id ' + releasedata[
'id'] + ' album id' + rgid)
release['Type'] = unicode(releasedata['release-group']['type'])
if release['Type'] == 'Album' and 'secondary-type-list' in releasedata['release-group']:
@@ -541,7 +587,7 @@ def get_new_releases(rgid, includeExtras=False, forcefull=False):
if secondary_type != release['Type']:
release['Type'] = secondary_type
#making the assumption that the most important artist will be first in the list
# making the assumption that the most important artist will be first in the list
if 'artist-credit' in releasedata:
release['ArtistID'] = unicode(releasedata['artist-credit'][0]['artist']['id'])
release['ArtistName'] = unicode(releasedata['artist-credit-phrase'])
@@ -549,8 +595,9 @@ def get_new_releases(rgid, includeExtras=False, forcefull=False):
logger.warn('Release ' + releasedata['id'] + ' has no Artists associated.')
return False
release['ReleaseCountry'] = unicode(releasedata['country']) if 'country' in releasedata else u'Unknown'
#assuming that the list will contain media and that the format will be consistent
release['ReleaseCountry'] = unicode(
releasedata['country']) if 'country' in releasedata else u'Unknown'
# assuming that the list will contain media and that the format will be consistent
try:
additional_medium = ''
for position in releasedata['medium-list']:
@@ -562,16 +609,17 @@ def get_new_releases(rgid, includeExtras=False, forcefull=False):
disc_number = ''
else:
disc_number = str(medium_count) + 'x'
packaged_medium = disc_number + releasedata['medium-list'][0]['format'] + additional_medium
packaged_medium = disc_number + releasedata['medium-list'][0][
'format'] + additional_medium
release['ReleaseFormat'] = unicode(packaged_medium)
except:
release['ReleaseFormat'] = u'Unknown'
release['Tracks'] = getTracksFromRelease(releasedata)
# What we're doing here now is first updating the allalbums & alltracks table to the most
# current info, then moving the appropriate release into the album table and its associated
# tracks into the tracks table
# What we're doing here now is first updating the allalbums & alltracks table to the most
# current info, then moving the appropriate release into the album table and its associated
# tracks into the tracks table
controlValueDict = {"ReleaseID": release['ReleaseID']}
newValueDict = {"ArtistID": release['ArtistID'],
@@ -583,13 +631,14 @@ def get_new_releases(rgid, includeExtras=False, forcefull=False):
"Type": release['Type'],
"ReleaseCountry": release['ReleaseCountry'],
"ReleaseFormat": release['ReleaseFormat']
}
}
myDB.upsert("allalbums", newValueDict, controlValueDict)
for track in release['Tracks']:
cleanname = helpers.cleanName(release['ArtistName'] + ' ' + release['AlbumTitle'] + ' ' + track['title'])
cleanname = helpers.cleanName(
release['ArtistName'] + ' ' + release['AlbumTitle'] + ' ' + track['title'])
controlValueDict = {"TrackID": track['id'],
"ReleaseID": release['ReleaseID']}
@@ -603,30 +652,37 @@ def get_new_releases(rgid, includeExtras=False, forcefull=False):
"TrackDuration": track['duration'],
"TrackNumber": track['number'],
"CleanName": cleanname
}
}
match = myDB.action('SELECT Location, BitRate, Format from have WHERE CleanName=?', [cleanname]).fetchone()
match = myDB.action('SELECT Location, BitRate, Format from have WHERE CleanName=?',
[cleanname]).fetchone()
if not match:
match = myDB.action('SELECT Location, BitRate, Format from have WHERE ArtistName LIKE ? AND AlbumTitle LIKE ? AND TrackTitle LIKE ?', [release['ArtistName'], release['AlbumTitle'], track['title']]).fetchone()
#if not match:
#match = myDB.action('SELECT Location, BitRate, Format from have WHERE TrackID=?', [track['id']]).fetchone()
match = myDB.action(
'SELECT Location, BitRate, Format from have WHERE ArtistName LIKE ? AND AlbumTitle LIKE ? AND TrackTitle LIKE ?',
[release['ArtistName'], release['AlbumTitle'], track['title']]).fetchone()
# if not match:
# match = myDB.action('SELECT Location, BitRate, Format from have WHERE TrackID=?', [track['id']]).fetchone()
if match:
newValueDict['Location'] = match['Location']
newValueDict['BitRate'] = match['BitRate']
newValueDict['Format'] = match['Format']
#myDB.action('UPDATE have SET Matched="True" WHERE Location=?', [match['Location']])
myDB.action('UPDATE have SET Matched=? WHERE Location=?', (release['AlbumID'], match['Location']))
# myDB.action('UPDATE have SET Matched="True" WHERE Location=?', [match['Location']])
myDB.action('UPDATE have SET Matched=? WHERE Location=?',
(release['AlbumID'], match['Location']))
myDB.upsert("alltracks", newValueDict, controlValueDict)
num_new_releases = num_new_releases + 1
if album_checker:
logger.info('[%s] Existing release %s (%s) updated' % (release['ArtistName'], release['AlbumTitle'], rel_id_check))
logger.info('[%s] Existing release %s (%s) updated' % (
release['ArtistName'], release['AlbumTitle'], rel_id_check))
else:
logger.info('[%s] New release %s (%s) added' % (release['ArtistName'], release['AlbumTitle'], rel_id_check))
logger.info('[%s] New release %s (%s) added' % (
release['ArtistName'], release['AlbumTitle'], rel_id_check))
if force_repackage1 == 1:
num_new_releases = -1
logger.info('[%s] Forcing repackage of %s, since dB releases have been removed' % (release['ArtistName'], release_title))
logger.info('[%s] Forcing repackage of %s, since dB releases have been removed' % (
release['ArtistName'], release_title))
else:
num_new_releases = num_new_releases
@@ -643,23 +699,25 @@ def getTracksFromRelease(release):
except:
track_title = unicode(track['recording']['title'])
tracks.append({
'number': totalTracks,
'title': track_title,
'id': unicode(track['recording']['id']),
'url': u"http://musicbrainz.org/track/" + track['recording']['id'],
'duration': int(track['length']) if 'length' in track else 0
})
'number': totalTracks,
'title': track_title,
'id': unicode(track['recording']['id']),
'url': u"http://musicbrainz.org/track/" + track['recording']['id'],
'duration': int(track['length']) if 'length' in track else 0
})
totalTracks += 1
return tracks
# Used when there is a disambiguation
def findArtistbyAlbum(name):
myDB = db.DBConnection()
artist = myDB.action('SELECT AlbumTitle from have WHERE ArtistName=? AND AlbumTitle IS NOT NULL ORDER BY RANDOM()', [name]).fetchone()
artist = myDB.action(
'SELECT AlbumTitle from have WHERE ArtistName=? AND AlbumTitle IS NOT NULL ORDER BY RANDOM()',
[name]).fetchone()
if not artist:
return False
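findArtistbyAlbum resolves a disambiguated artist name by grabbing one random album already in the library for that name and searching release groups for the artist/album pair; the artist credit on the top result supplies the MBID. A simplified sketch (search fields follow the ones used in this file; the MusicBrainz lock and error handling are omitted):

    import musicbrainzngs

    def find_artist_id_by_album(name, album):
        criteria = {'artist': name.lower(), 'release': album.lower()}
        results = musicbrainzngs.search_release_groups(limit=1, **criteria)
        groups = results.get('release-group-list')
        if not groups:
            return None
        return groups[0]['artist-credit'][0]['artist']['id']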
@@ -686,21 +744,20 @@ def findArtistbyAlbum(name):
for releaseGroup in results:
newArtist = releaseGroup['artist-credit'][0]['artist']
# Only need the artist ID if we're doing an artist+album lookup
#if 'disambiguation' in newArtist:
# if 'disambiguation' in newArtist:
# uniquename = unicode(newArtist['sort-name'] + " (" + newArtist['disambiguation'] + ")")
#else:
# else:
# uniquename = unicode(newArtist['sort-name'])
#artist_dict['name'] = unicode(newArtist['sort-name'])
#artist_dict['uniquename'] = uniquename
# artist_dict['name'] = unicode(newArtist['sort-name'])
# artist_dict['uniquename'] = uniquename
artist_dict['id'] = unicode(newArtist['id'])
#artist_dict['url'] = u'http://musicbrainz.org/artist/' + newArtist['id']
#artist_dict['score'] = int(releaseGroup['ext:score'])
# artist_dict['url'] = u'http://musicbrainz.org/artist/' + newArtist['id']
# artist_dict['score'] = int(releaseGroup['ext:score'])
return artist_dict
def findAlbumID(artist=None, album=None):
results = None
chars = set('!?*-')
@@ -717,9 +774,11 @@ def findAlbumID(artist=None, album=None):
album = '"' + album + '"'
criteria = {'release': album.lower()}
with mb_lock:
results = musicbrainzngs.search_release_groups(limit=1, **criteria).get('release-group-list')
results = musicbrainzngs.search_release_groups(limit=1, **criteria).get(
'release-group-list')
except musicbrainzngs.WebServiceError as e:
logger.warn('Attempt to query MusicBrainz for %s - %s failed (%s)' % (artist, album, str(e)))
logger.warn(
'Attempt to query MusicBrainz for %s - %s failed (%s)' % (artist, album, str(e)))
mb_lock.snooze(5)
if not results:

View File

@@ -13,14 +13,12 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
import re
import json
import headphones
from headphones import db, helpers, logger, request
from headphones.common import USER_AGENT
def update(artistid, artist_name,release_groups):
def update(artistid, artist_name, release_groups):
""" Pretty simple and crude function to find the artist page on metacritic,
then parse that page to get critic & user scores for albums"""
@@ -28,12 +26,13 @@ def update(artistid, artist_name,release_groups):
# We could just do a search, then take the top result, but at least this will
# cut down on api calls. If it's ineffective then we'll switch to search
replacements = {" & " : " ", "." : ""}
mc_artist_name = helpers.replace_all(artist_name.lower(),replacements)
replacements = {" & ": " ", ".": ""}
mc_artist_name = helpers.replace_all(artist_name.lower(), replacements)
mc_artist_name = mc_artist_name.replace(" ","-")
mc_artist_name = mc_artist_name.replace(" ", "-")
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2243.2 Safari/537.36'}
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2243.2 Safari/537.36'}
url = "http://www.metacritic.com/person/" + mc_artist_name + "?filter-options=music&sort_options=date&num_items=100"
@@ -67,12 +66,12 @@ def update(artistid, artist_name,release_groups):
scores = row.find_all("span")
critic_score = scores[0].string
user_score = scores[1].string
score_dict = {'title':title,'critic_score':critic_score,'user_score':user_score}
score_dict = {'title': title, 'critic_score': critic_score, 'user_score': user_score}
score_list.append(score_dict)
# Save scores to the database
controlValueDict = {"ArtistID": artistid}
newValueDict = {'MetaCritic':json.dumps(score_list)}
newValueDict = {'MetaCritic': json.dumps(score_list)}
myDB.upsert("artists", newValueDict, controlValueDict)
for score in score_list:
@@ -84,5 +83,5 @@ def update(artistid, artist_name,release_groups):
critic_score = score['critic_score']
user_score = score['user_score']
controlValueDict = {"AlbumID": rg['id']}
newValueDict = {'CriticScore':critic_score,'UserScore':user_score}
newValueDict = {'CriticScore': critic_score, 'UserScore': user_score}
myDB.upsert("albums", newValueDict, controlValueDict)

View File

@@ -13,16 +13,17 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
import os
import time
import shutil
import subprocess
import headphones
import multiprocessing
import os
import headphones
from headphones import logger
from beets.mediafile import MediaFile
# xld
import getXldProfile
@@ -32,9 +33,11 @@ def encode(albumPath):
# Return if xld details not found
if use_xld:
(xldProfile, xldFormat, xldBitrate) = getXldProfile.getXldProfile(headphones.CONFIG.XLDPROFILE)
(xldProfile, xldFormat, xldBitrate) = getXldProfile.getXldProfile(
headphones.CONFIG.XLDPROFILE)
if not xldFormat:
logger.error('Details for xld profile \'%s\' not found, files will not be re-encoded', xldProfile)
logger.error('Details for xld profile \'%s\' not found, files will not be re-encoded',
xldProfile)
return None
else:
xldProfile = None
@@ -60,7 +63,8 @@ def encode(albumPath):
for music in f:
if any(music.lower().endswith('.' + x.lower()) for x in headphones.MEDIA_FORMATS):
if not use_xld:
encoderFormat = headphones.CONFIG.ENCODEROUTPUTFORMAT.encode(headphones.SYS_ENCODING)
encoderFormat = headphones.CONFIG.ENCODEROUTPUTFORMAT.encode(
headphones.SYS_ENCODING)
else:
xldMusicFile = os.path.join(r, music)
xldInfoMusic = MediaFile(xldMusicFile)
@@ -68,9 +72,11 @@ def encode(albumPath):
if headphones.CONFIG.ENCODERLOSSLESS:
ext = os.path.normpath(os.path.splitext(music)[1].lstrip(".")).lower()
if not use_xld and ext == 'flac' or use_xld and (ext != xldFormat and (xldInfoMusic.bitrate / 1000 > 400)):
if not use_xld and ext == 'flac' or use_xld and (
ext != xldFormat and (xldInfoMusic.bitrate / 1000 > 400)):
musicFiles.append(os.path.join(r, music))
musicTemp = os.path.normpath(os.path.splitext(music)[0] + '.' + encoderFormat)
musicTemp = os.path.normpath(
os.path.splitext(music)[0] + '.' + encoderFormat)
musicTempFiles.append(os.path.join(tempDirEncode, musicTemp))
else:
logger.debug('%s is already encoded', music)
@@ -86,7 +92,7 @@ def encode(albumPath):
encoder = os.path.join('/Applications', 'xld')
elif headphones.CONFIG.ENCODER == 'lame':
if headphones.SYS_PLATFORM == "win32":
## NEED THE DEFAULT LAME INSTALL ON WIN!
# NEED THE DEFAULT LAME INSTALL ON WIN!
encoder = "C:/Program Files/lame/lame.exe"
else:
encoder = "lame"
@@ -111,24 +117,31 @@ def encode(albumPath):
if use_xld:
if xldBitrate and (infoMusic.bitrate / 1000 <= xldBitrate):
logger.info('%s has bitrate <= %skb, will not be re-encoded', music.decode(headphones.SYS_ENCODING, 'replace'), xldBitrate)
logger.info('%s has bitrate <= %skb, will not be re-encoded',
music.decode(headphones.SYS_ENCODING, 'replace'), xldBitrate)
else:
encode = True
elif headphones.CONFIG.ENCODER == 'lame':
if not any(music.decode(headphones.SYS_ENCODING, 'replace').lower().endswith('.' + x) for x in ["mp3", "wav"]):
logger.warn('Lame cannot encode %s format for %s, use ffmpeg', os.path.splitext(music)[1], music)
if not any(
music.decode(headphones.SYS_ENCODING, 'replace').lower().endswith('.' + x) for x
in ["mp3", "wav"]):
logger.warn('Lame cannot encode %s format for %s, use ffmpeg',
os.path.splitext(music)[1], music)
else:
if music.decode(headphones.SYS_ENCODING, 'replace').lower().endswith('.mp3') and (int(infoMusic.bitrate / 1000) <= headphones.CONFIG.BITRATE):
logger.info('%s has bitrate <= %skb, will not be re-encoded', music, headphones.CONFIG.BITRATE)
if music.decode(headphones.SYS_ENCODING, 'replace').lower().endswith('.mp3') and (
int(infoMusic.bitrate / 1000) <= headphones.CONFIG.BITRATE):
logger.info('%s has bitrate <= %skb, will not be re-encoded', music,
headphones.CONFIG.BITRATE)
else:
encode = True
else:
if headphones.CONFIG.ENCODEROUTPUTFORMAT == 'ogg':
if music.decode(headphones.SYS_ENCODING, 'replace').lower().endswith('.ogg'):
logger.warn('Cannot re-encode .ogg %s', music.decode(headphones.SYS_ENCODING, 'replace'))
logger.warn('Cannot re-encode .ogg %s',
music.decode(headphones.SYS_ENCODING, 'replace'))
else:
encode = True
elif headphones.CONFIG.ENCODEROUTPUTFORMAT == 'mp3' or headphones.CONFIG.ENCODEROUTPUTFORMAT == 'm4a':
else:
if music.decode(headphones.SYS_ENCODING, 'replace').lower().endswith('.' + headphones.CONFIG.ENCODEROUTPUTFORMAT) and (int(infoMusic.bitrate / 1000) <= headphones.CONFIG.BITRATE):
logger.info('%s has bitrate <= %skb, will not be re-encoded', music, headphones.CONFIG.BITRATE)
else:
@@ -155,7 +168,7 @@ def encode(albumPath):
processes = headphones.CONFIG.ENCODER_MULTICORE_COUNT
logger.debug("Multi-core encoding enabled, spawning %d processes",
processes)
processes)
# Use multiprocessing only if it's worth the overhead. and if it is
# enabled. If not, then use the old fashioned way.
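Per the comment above, a worker pool is only spawned when multi-core encoding is enabled and there is enough work to amortize the overhead; otherwise files are encoded serially. A sketch under those assumptions (`encode_one` stands in for the per-file command() call and is hypothetical):

    import multiprocessing

    def encode_all(jobs, multicore, core_count):
        if multicore and len(jobs) > 1:
            processes = core_count or multiprocessing.cpu_count()
            pool = multiprocessing.Pool(processes=processes)
            try:
                pool.map(encode_one, jobs)
            finally:
                pool.close()
                pool.join()
        else:
            for job in jobs:
                encode_one(job)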
@@ -185,7 +198,8 @@ def encode(albumPath):
for dest in musicTempFiles:
if not os.path.exists(dest):
encoder_failed = True
logger.error("Encoded file '%s' does not exist in the destination temp directory", dest)
logger.error("Encoded file '%s' does not exist in the destination temp directory",
dest)
# No errors, move from temp to parent
if not encoder_failed and musicTempFiles:
@@ -211,7 +225,9 @@ def encode(albumPath):
# Return with error if any encoding errors
if encoder_failed:
logger.error("One or more files failed to encode. Ensure you have the latest version of %s installed.", headphones.CONFIG.ENCODER)
logger.error(
"One or more files failed to encode. Ensure you have the latest version of %s installed.",
headphones.CONFIG.ENCODER)
return None
time.sleep(1)
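This failure check is half of the new temporary-folder flow: every expected file must exist in the temp directory before anything replaces the originals, so a crashed encoder never leaves an album half-converted. Schematically (a sketch, not the exact code):

    import os
    import shutil

    def finalize(temp_paths, final_dir):
        missing = [p for p in temp_paths if not os.path.exists(p)]
        if missing:
            return False  # leave the originals untouched on any failure
        for p in temp_paths:
            shutil.move(p, os.path.join(final_dir, os.path.basename(p)))
        return True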
@@ -269,7 +285,8 @@ def command(encoder, musicSource, musicDest, albumPath, xldProfile):
if not headphones.CONFIG.ADVANCEDENCODER:
opts.extend(['-h'])
if headphones.CONFIG.ENCODERVBRCBR == 'cbr':
opts.extend(['--resample', str(headphones.CONFIG.SAMPLINGFREQUENCY), '-b', str(headphones.CONFIG.BITRATE)])
opts.extend(['--resample', str(headphones.CONFIG.SAMPLINGFREQUENCY), '-b',
str(headphones.CONFIG.BITRATE)])
elif headphones.CONFIG.ENCODERVBRCBR == 'vbr':
opts.extend(['-v', str(headphones.CONFIG.ENCODERQUALITY)])
else:
@@ -290,7 +307,8 @@ def command(encoder, musicSource, musicDest, albumPath, xldProfile):
if headphones.CONFIG.ENCODEROUTPUTFORMAT == 'm4a':
opts.extend(['-strict', 'experimental'])
if headphones.CONFIG.ENCODERVBRCBR == 'cbr':
opts.extend(['-ar', str(headphones.CONFIG.SAMPLINGFREQUENCY), '-ab', str(headphones.CONFIG.BITRATE) + 'k'])
opts.extend(['-ar', str(headphones.CONFIG.SAMPLINGFREQUENCY), '-ab',
str(headphones.CONFIG.BITRATE) + 'k'])
elif headphones.CONFIG.ENCODERVBRCBR == 'vbr':
opts.extend(['-aq', str(headphones.CONFIG.ENCODERQUALITY)])
opts.extend(['-y', '-ac', '2', '-vn'])
@@ -311,7 +329,8 @@ def command(encoder, musicSource, musicDest, albumPath, xldProfile):
if headphones.CONFIG.ENCODEROUTPUTFORMAT == 'm4a':
opts.extend(['-strict', 'experimental'])
if headphones.CONFIG.ENCODERVBRCBR == 'cbr':
opts.extend(['-ar', str(headphones.CONFIG.SAMPLINGFREQUENCY), '-ab', str(headphones.CONFIG.BITRATE) + 'k'])
opts.extend(['-ar', str(headphones.CONFIG.SAMPLINGFREQUENCY), '-ab',
str(headphones.CONFIG.BITRATE) + 'k'])
elif headphones.CONFIG.ENCODERVBRCBR == 'vbr':
opts.extend(['-aq', str(headphones.CONFIG.ENCODERQUALITY)])
opts.extend(['-y', '-ac', '2', '-vn'])
@@ -337,13 +356,14 @@ def command(encoder, musicSource, musicDest, albumPath, xldProfile):
logger.debug(subprocess.list2cmdline(cmd))
process = subprocess.Popen(cmd, startupinfo=startupinfo,
stdin=open(os.devnull, 'rb'), stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin=open(os.devnull, 'rb'), stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, stderr = process.communicate(headphones.CONFIG.ENCODER)
# Error if return code not zero
if process.returncode:
logger.error('Encoding failed for %s' % (musicSource.decode(headphones.SYS_ENCODING, 'replace')))
logger.error(
'Encoding failed for %s' % (musicSource.decode(headphones.SYS_ENCODING, 'replace')))
out = stdout if stdout else stderr
out = out.decode(headphones.SYS_ENCODING, 'replace')
outlast2lines = '\n'.join(out.splitlines()[-2:])

View File

@@ -13,30 +13,25 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
from headphones import logger, helpers, common, request
from xml.dom import minidom
from httplib import HTTPSConnection
from urlparse import parse_qsl
from urllib import urlencode
from pynma import pynma
import base64
import cherrypy
import urllib
import urllib2
import headphones
import os.path
import subprocess
import gntp.notifier
import json
import oauth2 as oauth
import pythontwitter as twitter
from email.mime.text import MIMEText
import smtplib
import email.utils
from httplib import HTTPSConnection
from urlparse import parse_qsl
import urllib2
import os.path
from headphones import logger, helpers, common, request
from pynma import pynma
import cherrypy
import headphones
import gntp.notifier
import oauth2 as oauth
import pythontwitter as twitter
class GROWL(object):
@@ -95,7 +90,7 @@ class GROWL(object):
# Send it, including an image
image_file = os.path.join(str(headphones.PROG_DIR),
"data/images/headphoneslogo.png")
"data/images/headphoneslogo.png")
with open(image_file, 'rb') as f:
image = f.read()
@@ -114,7 +109,7 @@ class GROWL(object):
logger.info(u"Growl notifications sent.")
def updateLibrary(self):
#For uniformity reasons not removed
# For uniformity reasons not removed
return
def test(self, host, password):
@@ -151,24 +146,24 @@ class PROWL(object):
'priority': headphones.CONFIG.PROWL_PRIORITY}
http_handler.request("POST",
"/publicapi/add",
headers={'Content-type': "application/x-www-form-urlencoded"},
body=urlencode(data))
"/publicapi/add",
headers={'Content-type': "application/x-www-form-urlencoded"},
body=urlencode(data))
response = http_handler.getresponse()
request_status = response.status
if request_status == 200:
logger.info(u"Prowl notifications sent.")
return True
logger.info(u"Prowl notifications sent.")
return True
elif request_status == 401:
logger.info(u"Prowl auth failed: %s" % response.reason)
return False
logger.info(u"Prowl auth failed: %s" % response.reason)
return False
else:
logger.info(u"Prowl notification failed.")
return False
logger.info(u"Prowl notification failed.")
return False
def updateLibrary(self):
#For uniformity reasons not removed
# For uniformity reasons not removed
return
def test(self, keys, priority):
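The Prowl call above is a plain form-encoded POST over HTTPS, with the outcome decided purely by the HTTP status code (200 sent, 401 bad key). A self-contained sketch of the same exchange; the API key is a placeholder and api.prowlapp.com is assumed as the public Prowl host:

# Sketch: form-encoded POST to the Prowl API; 200 = sent, 401 = bad key.
from httplib import HTTPSConnection
from urllib import urlencode

data = {'apikey': 'YOUR-PROWL-KEY',  # placeholder
        'application': 'Headphones',
        'event': 'Test',
        'description': 'Test message',
        'priority': 0}
http_handler = HTTPSConnection('api.prowlapp.com')
http_handler.request('POST', '/publicapi/add',
                     headers={'Content-type': 'application/x-www-form-urlencoded'},
                     body=urlencode(data))
status = http_handler.getresponse().status
print 'sent' if status == 200 else 'failed (%d)' % status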
@@ -185,7 +180,6 @@ class MPC(object):
"""
def __init__(self):
pass
def notify(self):
@@ -218,9 +212,11 @@ class XBMC(object):
url = host + '/jsonrpc'
if self.password:
response = request.request_json(url, method="post", data=json.dumps(data), headers=headers, auth=(self.username, self.password))
response = request.request_json(url, method="post", data=json.dumps(data),
headers=headers, auth=(self.username, self.password))
else:
response = request.request_json(url, method="post", data=json.dumps(data), headers=headers)
response = request.request_json(url, method="post", data=json.dumps(data),
headers=headers)
if response:
return response[0]['result']
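Both branches above POST the same JSON-RPC envelope - a single-element batch, hence the response[0]['result'] indexing - and differ only in whether HTTP basic auth is attached. A sketch of the two methods this class uses, written against the requests library rather than Headphones' own request helper (host, port and credentials are placeholders):

# Sketch: XBMC/Kodi JSON-RPC over HTTP POST. Host, port and credentials
# are placeholders; auth is only needed if the web server requires it.
import json
import requests

url = 'http://localhost:8080/jsonrpc'
headers = {'Content-Type': 'application/json'}

def sendjson(method, params=None):
    data = [{'id': 0, 'jsonrpc': '2.0', 'method': method, 'params': params or {}}]
    response = requests.post(url, data=json.dumps(data), headers=headers,
                             auth=('user', 'password'))
    return response.json()[0]['result']

version = sendjson('Application.GetProperties',
                   {'properties': ['version']})['version']['major']
sendjson('GUI.ShowNotification', {'title': 'Headphones',
                                  'message': 'Album added to your library',
                                  'displaytime': 3000})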
@@ -244,20 +240,24 @@ class XBMC(object):
header = "Headphones"
message = "%s - %s added to your library" % (artist, album)
time = "3000" # in ms
time = "3000" # in ms
for host in hosts:
logger.info('Sending notification command to XBMC @ ' + host)
try:
version = self._sendjson(host, 'Application.GetProperties', {'properties': ['version']})['version']['major']
version = \
self._sendjson(host, 'Application.GetProperties', {'properties': ['version']})[
'version']['major']
if version < 12: #Eden
if version < 12: # Eden
notification = header + "," + message + "," + time + "," + albumartpath
notifycommand = {'command': 'ExecBuiltIn', 'parameter': 'Notification(' + notification + ')'}
notifycommand = {'command': 'ExecBuiltIn',
'parameter': 'Notification(' + notification + ')'}
request = self._sendhttp(host, notifycommand)
else: #Frodo
params = {'title': header, 'message': message, 'displaytime': int(time), 'image': albumartpath}
else: # Frodo
params = {'title': header, 'message': message, 'displaytime': int(time),
'image': albumartpath}
request = self._sendjson(host, 'GUI.ShowNotification', params)
if not request:
@@ -335,9 +335,11 @@ class Plex(object):
url = host + '/jsonrpc'
if self.password:
response = request.request_json(url, method="post", data=json.dumps(data), headers=headers, auth=(self.username, self.password))
response = request.request_json(url, method="post", data=json.dumps(data),
headers=headers, auth=(self.username, self.password))
else:
response = request.request_json(url, method="post", data=json.dumps(data), headers=headers)
response = request.request_json(url, method="post", data=json.dumps(data),
headers=headers)
if response:
return response[0]['result']
@@ -376,20 +378,24 @@ class Plex(object):
header = "Headphones"
message = "%s - %s added to your library" % (artist, album)
time = "3000" # in ms
time = "3000" # in ms
for host in hosts:
logger.info('Sending notification command to Plex client @ ' + host)
try:
version = self._sendjson(host, 'Application.GetProperties', {'properties': ['version']})['version']['major']
version = \
self._sendjson(host, 'Application.GetProperties', {'properties': ['version']})[
'version']['major']
if version < 12: #Eden
if version < 12: # Eden
notification = header + "," + message + "," + time + "," + albumartpath
notifycommand = {'command': 'ExecBuiltIn', 'parameter': 'Notification(' + notification + ')'}
notifycommand = {'command': 'ExecBuiltIn',
'parameter': 'Notification(' + notification + ')'}
request = self._sendhttp(host, notifycommand)
else: #Frodo
params = {'title': header, 'message': message, 'displaytime': int(time), 'image': albumartpath}
else: # Frodo
params = {'title': header, 'message': message, 'displaytime': int(time),
'image': albumartpath}
request = self._sendjson(host, 'GUI.ShowNotification', params)
if not request:
@@ -438,7 +444,6 @@ class NMA(object):
class PUSHBULLET(object):
def __init__(self):
self.apikey = headphones.CONFIG.PUSHBULLET_APIKEY
self.deviceid = headphones.CONFIG.PUSHBULLET_DEVICEID
@@ -456,8 +461,8 @@ class PUSHBULLET(object):
if self.deviceid:
data['device_iden'] = self.deviceid
headers={'Content-type': "application/json",
'Authorization': 'Bearer ' + headphones.CONFIG.PUSHBULLET_APIKEY}
headers = {'Content-type': "application/json",
'Authorization': 'Bearer ' + headphones.CONFIG.PUSHBULLET_APIKEY}
response = request.request_json(url, method="post", headers=headers, data=json.dumps(data))
@@ -468,8 +473,8 @@ class PUSHBULLET(object):
logger.info(u"PushBullet notification failed.")
return False
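The whitespace fix above is cosmetic; the request itself is a JSON POST authenticated with a Bearer token. A sketch against the Pushbullet v2 pushes endpoint (the access token is a placeholder):

# Sketch: Pushbullet push via a Bearer-token JSON POST. The token is a
# placeholder; 200 means the push was accepted.
import json
import requests

url = 'https://api.pushbullet.com/v2/pushes'
headers = {'Content-type': 'application/json',
           'Authorization': 'Bearer YOUR-ACCESS-TOKEN'}
data = {'type': 'note', 'title': 'Headphones', 'body': 'Download finished'}
response = requests.post(url, headers=headers, data=json.dumps(data))
print 'ok' if response.status_code == 200 else 'failed'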
class PUSHALOT(object):
def notify(self, message, event):
if not headphones.CONFIG.PUSHALOT_ENABLED:
return
@@ -487,9 +492,9 @@ class PUSHALOT(object):
'Body': message.encode("utf-8")}
http_handler.request("POST",
"/api/sendmessage",
headers={'Content-type': "application/x-www-form-urlencoded"},
body=urlencode(data))
"/api/sendmessage",
headers={'Content-type': "application/x-www-form-urlencoded"},
body=urlencode(data))
response = http_handler.getresponse()
request_status = response.status
@@ -498,14 +503,14 @@ class PUSHALOT(object):
logger.debug(u"Pushalot response body: %r" % response.read())
if request_status == 200:
logger.info(u"Pushalot notifications sent.")
return True
logger.info(u"Pushalot notifications sent.")
return True
elif request_status == 410:
logger.info(u"Pushalot auth failed: %s" % response.reason)
return False
logger.info(u"Pushalot auth failed: %s" % response.reason)
return False
else:
logger.info(u"Pushalot notification failed.")
return False
logger.info(u"Pushalot notification failed.")
return False
class Synoindex(object):
@@ -519,7 +524,8 @@ class Synoindex(object):
path = os.path.abspath(path)
if not self.util_exists():
logger.warn("Error sending notification: synoindex utility not found at %s" % self.util_loc)
logger.warn(
"Error sending notification: synoindex utility not found at %s" % self.util_loc)
return
if os.path.isfile(path):
@@ -527,15 +533,17 @@ class Synoindex(object):
elif os.path.isdir(path):
cmd_arg = '-A'
else:
logger.warn("Error sending notification: Path passed to synoindex was not a file or folder.")
logger.warn(
"Error sending notification: Path passed to synoindex was not a file or folder.")
return
cmd = [self.util_loc, cmd_arg, path]
logger.info("Calling synoindex command: %s" % str(cmd))
try:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=headphones.PROG_DIR)
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
cwd=headphones.PROG_DIR)
out, error = p.communicate()
#synoindex never returns any codes other than '0', highly irritating
# synoindex never returns any codes other than '0', highly irritating
except OSError, e:
logger.warn("Error sending notification: %s" % str(e))
@@ -546,7 +554,6 @@ class Synoindex(object):
class PUSHOVER(object):
def __init__(self):
self.enabled = headphones.CONFIG.PUSHOVER_ENABLED
self.keys = headphones.CONFIG.PUSHOVER_KEYS
@@ -584,7 +591,7 @@ class PUSHOVER(object):
return False
def updateLibrary(self):
#For uniformity reasons not removed
# For uniformity reasons not removed
return
def test(self, keys, priority):
@@ -596,7 +603,6 @@ class PUSHOVER(object):
class TwitterNotifier(object):
REQUEST_TOKEN_URL = 'https://api.twitter.com/oauth/request_token'
ACCESS_TOKEN_URL = 'https://api.twitter.com/oauth/access_token'
AUTHORIZATION_URL = 'https://api.twitter.com/oauth/authorize'
@@ -608,14 +614,17 @@ class TwitterNotifier(object):
def notify_snatch(self, title):
if headphones.CONFIG.TWITTER_ONSNATCH:
self._notifyTwitter(common.notifyStrings[common.NOTIFY_SNATCH] + ': ' + title + ' at ' + helpers.now())
self._notifyTwitter(
common.notifyStrings[common.NOTIFY_SNATCH] + ': ' + title + ' at ' + helpers.now())
def notify_download(self, title):
if headphones.CONFIG.TWITTER_ENABLED:
self._notifyTwitter(common.notifyStrings[common.NOTIFY_DOWNLOAD] + ': ' + title + ' at ' + helpers.now())
self._notifyTwitter(common.notifyStrings[
common.NOTIFY_DOWNLOAD] + ': ' + title + ' at ' + helpers.now())
def test_notify(self):
return self._notifyTwitter("This is a test notification from Headphones at " + helpers.now(), force=True)
return self._notifyTwitter(
"This is a test notification from Headphones at " + helpers.now(), force=True)
def _get_authorization(self):
@@ -652,7 +661,8 @@ class TwitterNotifier(object):
logger.info('oauth_consumer: ' + str(oauth_consumer))
oauth_client = oauth.Client(oauth_consumer, token)
logger.info('oauth_client: ' + str(oauth_client))
resp, content = oauth_client.request(self.ACCESS_TOKEN_URL, method='POST', body='oauth_verifier=%s' % key)
resp, content = oauth_client.request(self.ACCESS_TOKEN_URL, method='POST',
body='oauth_verifier=%s' % key)
logger.info('resp, content: ' + str(resp) + ',' + str(content))
access_token = dict(parse_qsl(content))
@@ -660,7 +670,8 @@ class TwitterNotifier(object):
logger.info('resp[status] = ' + str(resp['status']))
if resp['status'] != '200':
logger.info('The request for a token did not succeed: ' + str(resp['status']), logger.ERROR)
logger.info('The request for a token did not succeed: ' + str(resp['status']),
logger.ERROR)
return False
else:
logger.info('Your Twitter Access Token key: %s' % access_token['oauth_token'])
@@ -698,7 +709,6 @@ class TwitterNotifier(object):
class OSX_NOTIFY(object):
def __init__(self):
try:
self.objc = __import__("objc")
@@ -751,7 +761,7 @@ class OSX_NOTIFY(object):
if image:
source_img = self.AppKit.NSImage.alloc().initByReferencingFile_(image)
notification.setContentImage_(source_img)
#notification.set_identityImage_(source_img)
# notification.set_identityImage_(source_img)
notification.setHasActionButton_(False)
notification_center = NSUserNotificationCenter.defaultUserNotificationCenter()
@@ -769,7 +779,6 @@ class OSX_NOTIFY(object):
class BOXCAR(object):
def __init__(self):
self.url = 'https://new.boxcar.io/api/notifications'
@@ -783,7 +792,7 @@ class BOXCAR(object):
'notification[title]': title.encode('utf-8'),
'notification[long_message]': message.encode('utf-8'),
'notification[sound]': "done"
})
})
req = urllib2.Request(self.url)
handle = urllib2.urlopen(req, data)
@@ -796,7 +805,6 @@ class BOXCAR(object):
class SubSonicNotifier(object):
def __init__(self):
self.host = headphones.CONFIG.SUBSONIC_HOST
self.username = headphones.CONFIG.SUBSONIC_USERNAME
@@ -812,10 +820,10 @@ class SubSonicNotifier(object):
# Invoke request
request.request_response(self.host + "musicFolderSettings.view?scanNow",
auth=(self.username, self.password))
auth=(self.username, self.password))
class Email(object):
def notify(self, subject, message):
message = MIMEText(message, 'plain', "utf-8")
@@ -824,20 +832,24 @@ class Email(object):
message['To'] = headphones.CONFIG.EMAIL_TO
try:
if (headphones.CONFIG.EMAIL_SSL):
mailserver = smtplib.SMTP_SSL(headphones.CONFIG.EMAIL_SMTP_SERVER, headphones.CONFIG.EMAIL_SMTP_PORT)
if headphones.CONFIG.EMAIL_SSL:
mailserver = smtplib.SMTP_SSL(headphones.CONFIG.EMAIL_SMTP_SERVER,
headphones.CONFIG.EMAIL_SMTP_PORT)
else:
mailserver = smtplib.SMTP(headphones.CONFIG.EMAIL_SMTP_SERVER, headphones.CONFIG.EMAIL_SMTP_PORT)
mailserver = smtplib.SMTP(headphones.CONFIG.EMAIL_SMTP_SERVER,
headphones.CONFIG.EMAIL_SMTP_PORT)
if (headphones.CONFIG.EMAIL_TLS):
if headphones.CONFIG.EMAIL_TLS:
mailserver.starttls()
mailserver.ehlo()
if headphones.CONFIG.EMAIL_SMTP_USER:
mailserver.login(headphones.CONFIG.EMAIL_SMTP_USER, headphones.CONFIG.EMAIL_SMTP_PASSWORD)
mailserver.login(headphones.CONFIG.EMAIL_SMTP_USER,
headphones.CONFIG.EMAIL_SMTP_PASSWORD)
mailserver.sendmail(headphones.CONFIG.EMAIL_FROM, headphones.CONFIG.EMAIL_TO, message.as_string())
mailserver.sendmail(headphones.CONFIG.EMAIL_FROM, headphones.CONFIG.EMAIL_TO,
message.as_string())
mailserver.quit()
return True
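The rewrapped block keeps the same flow: pick SMTP_SSL or plain SMTP, optionally upgrade the plain connection with STARTTLS, log in only when credentials are configured, send, quit. The whole sequence with placeholder settings:

# Sketch: notification mail; SSL, TLS and login are each optional.
# Server, ports, addresses and credentials are placeholders.
import smtplib
from email.mime.text import MIMEText

use_ssl, use_tls = False, True
msg = MIMEText('Download and Postprocessing completed', 'plain', 'utf-8')
msg['Subject'] = 'Headphones'
msg['From'] = 'headphones@example.com'
msg['To'] = 'you@example.com'

if use_ssl:
    mailserver = smtplib.SMTP_SSL('smtp.example.com', 465)
else:
    mailserver = smtplib.SMTP('smtp.example.com', 587)
    if use_tls:
        mailserver.starttls()
        mailserver.ehlo()
mailserver.login('user', 'password')  # only if the server requires auth
mailserver.sendmail(msg['From'], msg['To'], msg.as_string())
mailserver.quit()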

View File

@@ -19,18 +19,15 @@
# along with Sick Beard. If not, see <http://www.gnu.org/licenses/>.
import httplib
import headphones
from base64 import standard_b64encode
import httplib
import xmlrpclib
import headphones
from headphones import logger
def sendNZB(nzb):
addToTop = False
nzbgetXMLrpc = "%(protocol)s://%(username)s:%(password)s@%(host)s/xmlrpc"
@@ -45,17 +42,22 @@ def sendNZB(nzb):
protocol = 'http'
host = headphones.CONFIG.NZBGET_HOST.replace('http://', '', 1)
url = nzbgetXMLrpc % {"protocol": protocol, "host": host, "username": headphones.CONFIG.NZBGET_USERNAME, "password": headphones.CONFIG.NZBGET_PASSWORD}
url = nzbgetXMLrpc % {"protocol": protocol, "host": host,
"username": headphones.CONFIG.NZBGET_USERNAME,
"password": headphones.CONFIG.NZBGET_PASSWORD}
nzbGetRPC = xmlrpclib.ServerProxy(url)
try:
if nzbGetRPC.writelog("INFO", "headphones connected to drop of %s any moment now." % (nzb.name + ".nzb")):
if nzbGetRPC.writelog("INFO", "headphones connected to drop of %s any moment now." % (
nzb.name + ".nzb")):
logger.debug(u"Successfully connected to NZBget")
else:
logger.info(u"Successfully connected to NZBget, but unable to send a message" % (nzb.name + ".nzb"))
logger.info(u"Successfully connected to NZBget, but unable to send a message" % (
nzb.name + ".nzb"))
except httplib.socket.error:
logger.error(u"Please check your NZBget host and port (if it is running). NZBget is not responding to this combination")
logger.error(
u"Please check your NZBget host and port (if it is running). NZBget is not responding to this combination")
return False
except xmlrpclib.ProtocolError, e:
@@ -82,7 +84,9 @@ def sendNZB(nzb):
nzbget_version = int(nzbget_version_str[:nzbget_version_str.find(".")])
if nzbget_version == 0:
if nzbcontent64 is not None:
nzbget_result = nzbGetRPC.append(nzb.name + ".nzb", headphones.CONFIG.NZBGET_CATEGORY, addToTop, nzbcontent64)
nzbget_result = nzbGetRPC.append(nzb.name + ".nzb",
headphones.CONFIG.NZBGET_CATEGORY, addToTop,
nzbcontent64)
else:
# from headphones.common.providers.generic import GenericProvider
# if nzb.resultType == "nzb":
@@ -95,24 +99,35 @@ def sendNZB(nzb):
return False
elif nzbget_version == 12:
if nzbcontent64 is not None:
nzbget_result = nzbGetRPC.append(nzb.name + ".nzb", headphones.CONFIG.NZBGET_CATEGORY, headphones.CONFIG.NZBGET_PRIORITY, False,
nzbget_result = nzbGetRPC.append(nzb.name + ".nzb",
headphones.CONFIG.NZBGET_CATEGORY,
headphones.CONFIG.NZBGET_PRIORITY, False,
nzbcontent64, False, dupekey, dupescore, "score")
else:
nzbget_result = nzbGetRPC.appendurl(nzb.name + ".nzb", headphones.CONFIG.NZBGET_CATEGORY, headphones.CONFIG.NZBGET_PRIORITY, False,
nzbget_result = nzbGetRPC.appendurl(nzb.name + ".nzb",
headphones.CONFIG.NZBGET_CATEGORY,
headphones.CONFIG.NZBGET_PRIORITY, False,
nzb.url, False, dupekey, dupescore, "score")
# v13+ has a new combined append method that accepts both (url and content)
# also the return value has changed from boolean to integer
# (Positive number representing NZBID of the queue item. 0 and negative numbers represent error codes.)
elif nzbget_version >= 13:
nzbget_result = True if nzbGetRPC.append(nzb.name + ".nzb", nzbcontent64 if nzbcontent64 is not None else nzb.url,
headphones.CONFIG.NZBGET_CATEGORY, headphones.CONFIG.NZBGET_PRIORITY, False, False, dupekey, dupescore,
nzbget_result = True if nzbGetRPC.append(nzb.name + ".nzb",
nzbcontent64 if nzbcontent64 is not None else nzb.url,
headphones.CONFIG.NZBGET_CATEGORY,
headphones.CONFIG.NZBGET_PRIORITY, False,
False, dupekey, dupescore,
"score") > 0 else False
else:
if nzbcontent64 is not None:
nzbget_result = nzbGetRPC.append(nzb.name + ".nzb", headphones.CONFIG.NZBGET_CATEGORY, headphones.CONFIG.NZBGET_PRIORITY, False,
nzbget_result = nzbGetRPC.append(nzb.name + ".nzb",
headphones.CONFIG.NZBGET_CATEGORY,
headphones.CONFIG.NZBGET_PRIORITY, False,
nzbcontent64)
else:
nzbget_result = nzbGetRPC.appendurl(nzb.name + ".nzb", headphones.CONFIG.NZBGET_CATEGORY, headphones.CONFIG.NZBGET_PRIORITY, False,
nzbget_result = nzbGetRPC.appendurl(nzb.name + ".nzb",
headphones.CONFIG.NZBGET_CATEGORY,
headphones.CONFIG.NZBGET_PRIORITY, False,
nzb.url)
if nzbget_result:
@@ -122,5 +137,6 @@ def sendNZB(nzb):
logger.error(u"NZBget could not add %s to the queue" % (nzb.name + ".nzb"))
return False
except:
logger.error(u"Connect Error to NZBget: could not add %s to the queue" % (nzb.name + ".nzb"))
logger.error(
u"Connect Error to NZBget: could not add %s to the queue" % (nzb.name + ".nzb"))
return False
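Every variant above goes through one xmlrpclib proxy whose URL embeds the credentials; only the append/appendurl signature changes with the server version, which is why the major version is probed first. The connection itself, with placeholder credentials:

# Sketch: connect to NZBget over XML-RPC and probe its major version.
# Username, password and host are placeholders.
import xmlrpclib

url = 'http://%(username)s:%(password)s@%(host)s/xmlrpc' % {
    'username': 'nzbget', 'password': 'secret', 'host': 'localhost:6789'}
nzbGetRPC = xmlrpclib.ServerProxy(url)
nzbget_version_str = nzbGetRPC.version()  # e.g. '16.4'
nzbget_version = int(nzbget_version_str[:nzbget_version_str.find('.')])
print 'Connected to NZBget %d' % nzbget_version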

View File

@@ -13,20 +13,20 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
import os
import re
import shutil
import uuid
import beets
import threading
import itertools
import headphones
import tempfile
import os
import re
import beets
import headphones
from beets import autotag
from beets import config as beetsconfig
from beets.mediafile import MediaFile, FileTypeError, UnreadableFileError
from beetsplug import lyrics as beetslyrics
from headphones import notifiers, utorrent, transmission
from headphones import db, albumart, librarysync
from headphones import logger, helpers, request, mb, music_encoder
@@ -48,19 +48,21 @@ def checkFolder():
else:
download_dir = headphones.CONFIG.DOWNLOAD_TORRENT_DIR
album_path = os.path.join(download_dir, album['FolderName']).encode(headphones.SYS_ENCODING, 'replace')
album_path = os.path.join(download_dir, album['FolderName']).encode(
headphones.SYS_ENCODING, 'replace')
logger.debug("Checking if %s exists" % album_path)
if os.path.exists(album_path):
logger.info('Found "' + album['FolderName'] + '" in ' + album['Kind'] + ' download folder. Verifying....')
logger.info('Found "' + album['FolderName'] + '" in ' + album[
'Kind'] + ' download folder. Verifying....')
verify(album['AlbumID'], album_path, album['Kind'])
else:
logger.info("No folder name found for " + album['Title'])
logger.debug("Checking download folder finished.")
def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=False):
myDB = db.DBConnection()
release = myDB.action('SELECT * from albums WHERE AlbumID=?', [albumid]).fetchone()
tracks = myDB.select('SELECT * from tracks WHERE AlbumID=?', [albumid])
@@ -72,11 +74,14 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
try:
release_list = mb.getReleaseGroup(albumid)
except Exception as e:
logger.error('Unable to get release information for manual album with rgid: %s. Error: %s', albumid, e)
logger.error(
'Unable to get release information for manual album with rgid: %s. Error: %s',
albumid, e)
return
if not release_list:
logger.error('Unable to get release information for manual album with rgid: %s', albumid)
logger.error('Unable to get release information for manual album with rgid: %s',
albumid)
return
# Since we're just using this to create the bare minimum information to
@@ -85,7 +90,9 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
release_dict = mb.getRelease(releaseid)
if not release_dict:
logger.error('Unable to get release information for manual album with rgid: %s. Cannot continue', albumid)
logger.error(
'Unable to get release information for manual album with rgid: %s. Cannot continue',
albumid)
return
# Check if the artist is added to the database. In case the database is
@@ -93,18 +100,26 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
# prevents new artists from appearing suddenly. In case forced is True,
# this check is skipped, since it is assumed the user wants this.
if headphones.CONFIG.FREEZE_DB and not forced:
artist = myDB.select("SELECT ArtistName, ArtistID FROM artists WHERE ArtistId=? OR ArtistName=?", [release_dict['artist_id'], release_dict['artist_name']])
artist = myDB.select(
"SELECT ArtistName, ArtistID FROM artists WHERE ArtistId=? OR ArtistName=?",
[release_dict['artist_id'], release_dict['artist_name']])
if not artist:
logger.warn("Continuing would add new artist '%s' (ID %s), " \
"but database is frozen. Will skip postprocessing for " \
"album with rgid: %s", release_dict['artist_name'],
release_dict['artist_id'], albumid)
"but database is frozen. Will skip postprocessing for " \
"album with rgid: %s", release_dict['artist_name'],
release_dict['artist_id'], albumid)
myDB.action('UPDATE snatched SET status = "Frozen" WHERE status NOT LIKE "Seed%" and AlbumID=?', [albumid])
myDB.action(
'UPDATE snatched SET status = "Frozen" WHERE status NOT LIKE "Seed%" and AlbumID=?',
[albumid])
frozen = re.search(r' \(Frozen\)(?:\[\d+\])?', albumpath)
if not frozen:
renameUnprocessedFolder(albumpath, tag="Frozen")
if headphones.CONFIG.RENAME_FROZEN:
renameUnprocessedFolder(albumpath, tag="Frozen")
else:
logger.warn(u"Won't rename %s to mark as 'Frozen', because it is disabled.",
albumpath.decode(headphones.SYS_ENCODING, 'replace'))
return
logger.info(u"Now adding/updating artist: " + release_dict['artist_name'])
@@ -120,7 +135,8 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
"DateAdded": helpers.today(),
"Status": "Paused"}
logger.info("ArtistID: " + release_dict['artist_id'] + " , ArtistName: " + release_dict['artist_name'])
logger.info("ArtistID: " + release_dict['artist_id'] + " , ArtistName: " + release_dict[
'artist_name'])
if headphones.CONFIG.INCLUDE_EXTRAS:
newValueDict['IncludeExtras'] = 1
@@ -147,18 +163,17 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
# Delete existing tracks associated with this AlbumID since we're going to replace them and don't want any extras
myDB.action('DELETE from tracks WHERE AlbumID=?', [albumid])
for track in release_dict['tracks']:
controlValueDict = {"TrackID": track['id'],
"AlbumID": albumid}
newValueDict = {"ArtistID": release_dict['artist_id'],
"ArtistName": release_dict['artist_name'],
"AlbumTitle": release_dict['title'],
"AlbumASIN": release_dict['asin'],
"TrackTitle": track['title'],
"TrackDuration": track['duration'],
"TrackNumber": track['number']
}
"ArtistName": release_dict['artist_name'],
"AlbumTitle": release_dict['title'],
"AlbumASIN": release_dict['asin'],
"TrackTitle": track['title'],
"TrackDuration": track['duration'],
"TrackNumber": track['number']
}
myDB.upsert("tracks", newValueDict, controlValueDict)
@@ -166,7 +181,8 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
newValueDict = {"Status": "Paused"}
myDB.upsert("artists", newValueDict, controlValueDict)
logger.info(u"Addition complete for: " + release_dict['title'] + " - " + release_dict['artist_name'])
logger.info(u"Addition complete for: " + release_dict['title'] + " - " + release_dict[
'artist_name'])
release = myDB.action('SELECT * from albums WHERE AlbumID=?', [albumid]).fetchone()
tracks = myDB.select('SELECT * from tracks WHERE AlbumID=?', [albumid])
@@ -182,11 +198,14 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
downloaded_cuecount += 1
# if any of the files end in *.part, we know the torrent isn't done yet. Process if forced, though
elif files.lower().endswith(('.part', '.utpart')) and not forced:
logger.info("Looks like " + os.path.basename(albumpath).decode(headphones.SYS_ENCODING, 'replace') + " isn't complete yet. Will try again on the next run")
logger.info(
"Looks like " + os.path.basename(albumpath).decode(headphones.SYS_ENCODING,
'replace') + " isn't complete yet. Will try again on the next run")
return
# Split cue
if headphones.CONFIG.CUE_SPLIT and downloaded_cuecount and downloaded_cuecount >= len(downloaded_track_list):
if headphones.CONFIG.CUE_SPLIT and downloaded_cuecount and downloaded_cuecount >= len(
downloaded_track_list):
if headphones.CONFIG.KEEP_TORRENT_FILES and Kind == "torrent":
albumpath = helpers.preserve_torrent_directory(albumpath)
if albumpath and helpers.cue_split(albumpath):
@@ -199,7 +218,10 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
try:
f = MediaFile(downloaded_track)
except Exception as e:
logger.info(u"Exception from MediaFile for: " + downloaded_track.decode(headphones.SYS_ENCODING, 'replace') + u" : " + unicode(e))
logger.info(
u"Exception from MediaFile for: " + downloaded_track.decode(headphones.SYS_ENCODING,
'replace') + u" : " + unicode(
e))
continue
if not f.artist:
@@ -216,7 +238,8 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
logger.debug('Matching metadata album: %s with album name: %s' % (metaalbum, dbalbum))
if metaartist == dbartist and metaalbum == dbalbum:
doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list, Kind, keep_original_folder)
doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list, Kind,
keep_original_folder)
return
# test #2: filenames
@@ -234,7 +257,8 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
logger.debug('Checking if track title: %s is in file name: %s' % (dbtrack, filetrack))
if dbtrack in filetrack:
doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list, Kind, keep_original_folder)
doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list, Kind,
keep_original_folder)
return
# test #3: number of songs and duration
@@ -266,29 +290,39 @@ def verify(albumid, albumpath, Kind=None, forced=False, keep_original_folder=Fal
logger.debug('Database track duration: %i' % db_track_duration)
delta = abs(downloaded_track_duration - db_track_duration)
if delta < 240:
doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list, Kind, keep_original_folder)
doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list, Kind,
keep_original_folder)
return
logger.warn(u'Could not identify album: %s. It may not be the intended album.' % albumpath.decode(headphones.SYS_ENCODING, 'replace'))
myDB.action('UPDATE snatched SET status = "Unprocessed" WHERE status NOT LIKE "Seed%" and AlbumID=?', [albumid])
logger.warn(u'Could not identify album: %s. It may not be the intended album.',
albumpath.decode(headphones.SYS_ENCODING, 'replace'))
myDB.action(
'UPDATE snatched SET status = "Unprocessed" WHERE status NOT LIKE "Seed%" and AlbumID=?',
[albumid])
processed = re.search(r' \(Unprocessed\)(?:\[\d+\])?', albumpath)
if not processed:
renameUnprocessedFolder(albumpath, tag="Unprocessed")
if headphones.CONFIG.RENAME_UNPROCESSED:
renameUnprocessedFolder(albumpath, tag="Unprocessed")
else:
logger.warn(u"Won't rename %s to mark as 'Unprocessed', because it is disabled.",
albumpath.decode(headphones.SYS_ENCODING, 'replace'))
def doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list, Kind=None, keep_original_folder=False):
def doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list, Kind=None,
keep_original_folder=False):
logger.info('Starting post-processing for: %s - %s' % (release['ArtistName'], release['AlbumTitle']))
new_folder = None
# Check to see if we're preserving the torrent dir
if (headphones.CONFIG.KEEP_TORRENT_FILES and Kind == "torrent" and 'headphones-modified' not in albumpath) or headphones.CONFIG.KEEP_ORIGINAL_FOLDER or keep_original_folder:
new_folder = os.path.join(albumpath, 'headphones-modified'.encode(headphones.SYS_ENCODING, 'replace'))
logger.info("Copying files to 'headphones-modified' subfolder to preserve downloaded files for seeding")
new_folder = os.path.join(tempfile.mkdtemp(prefix="headphones_"), "headphones")
logger.info("Copying files to " + new_folder.decode(headphones.SYS_ENCODING, 'replace') + " subfolder to preserve downloaded files for seeding")
try:
shutil.copytree(albumpath, new_folder)
# Update the album path with the new location
albumpath = new_folder
except Exception as e:
logger.warn("Cannot copy/move files to temp folder: " + new_folder.decode(headphones.SYS_ENCODING, 'replace') + ". Not continuing. Error: " + str(e))
shutil.rmtree(new_folder)
return
# Need to update the downloaded track list with the new location.
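This is the v0.5.10 "temporary folder during post-processing" change: instead of writing a headphones-modified subfolder inside the seeded download, the whole album is copied into a throwaway directory from tempfile.mkdtemp() and the copy is processed. The bare pattern (the source path is a placeholder):

# Sketch: post-process a copy in a temp dir so the seeded original
# stays untouched. The source folder is a placeholder.
import os
import shutil
import tempfile

albumpath = '/downloads/Some Album'
new_folder = os.path.join(tempfile.mkdtemp(prefix='headphones_'), 'headphones')
try:
    shutil.copytree(albumpath, new_folder)
    albumpath = new_folder  # everything below works on the copy
except Exception:
    shutil.rmtree(new_folder, ignore_errors=True)
    raise
# ... post-processing steps run here ...
shutil.rmtree(new_folder)  # clean the temp copy up afterwards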
@@ -311,20 +345,22 @@ def doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list,
return
except (FileTypeError, UnreadableFileError):
logger.error("Track file is not a valid media file: %s. Not " \
"continuing.", downloaded_track.decode(
headphones.SYS_ENCODING, "replace"))
"continuing.", downloaded_track.decode(
headphones.SYS_ENCODING, "replace"))
return
except IOError:
logger.error("Unable to find media file: %s. Not continuing.")
if new_folder:
shutil.rmtree(new_folder)
return
# If one of the options below is set, it will access/touch/modify the
# files, which requires write permissions. This step just checks this, so
# it will not try and fail later on with strange exceptions.
if headphones.CONFIG.EMBED_ALBUM_ART or headphones.CONFIG.CLEANUP_FILES or \
headphones.CONFIG.ADD_ALBUM_ART or headphones.CONFIG.CORRECT_METADATA or \
headphones.CONFIG.EMBED_LYRICS or headphones.CONFIG.RENAME_FILES or \
headphones.CONFIG.MOVE_FILES:
headphones.CONFIG.ADD_ALBUM_ART or headphones.CONFIG.CORRECT_METADATA or \
headphones.CONFIG.EMBED_LYRICS or headphones.CONFIG.RENAME_FILES or \
headphones.CONFIG.MOVE_FILES:
try:
with open(downloaded_track, "a+b") as fp:
@@ -332,15 +368,19 @@ def doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list,
except IOError as e:
logger.debug("Write check exact error: %s", e)
logger.error("Track file is not writable. This is required " \
"for some post processing steps: %s. Not continuing.",
downloaded_track.decode(headphones.SYS_ENCODING, "replace"))
"for some post processing steps: %s. Not continuing.",
downloaded_track.decode(headphones.SYS_ENCODING, "replace"))
if new_folder:
shutil.rmtree(new_folder)
return
#start encoding
# start encoding
if headphones.CONFIG.MUSIC_ENCODER:
downloaded_track_list = music_encoder.encode(albumpath)
if not downloaded_track_list:
if new_folder:
shutil.rmtree(new_folder)
return
artwork = None
@@ -374,6 +414,8 @@ def doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list,
if headphones.CONFIG.CORRECT_METADATA:
correctedMetadata = correctMetadata(albumid, release, downloaded_track_list)
if not correctedMetadata and headphones.CONFIG.DO_NOT_PROCESS_UNMATCHED:
if new_folder:
shutil.rmtree(new_folder)
return
if headphones.CONFIG.EMBED_LYRICS:
@@ -383,7 +425,8 @@ def doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list,
renameFiles(albumpath, downloaded_track_list, release)
if headphones.CONFIG.MOVE_FILES and not headphones.CONFIG.DESTINATION_DIR:
logger.error('No DESTINATION_DIR has been set. Set "Destination Directory" to the parent directory you want to move the files to')
logger.error(
'No DESTINATION_DIR has been set. Set "Destination Directory" to the parent directory you want to move the files to')
albumpaths = [albumpath]
elif headphones.CONFIG.MOVE_FILES and headphones.CONFIG.DESTINATION_DIR:
albumpaths = moveFiles(albumpath, release, tracks)
@@ -394,15 +437,20 @@ def doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list,
myDB = db.DBConnection()
myDB.action('UPDATE albums SET status = "Downloaded" WHERE AlbumID=?', [albumid])
myDB.action('UPDATE snatched SET status = "Processed" WHERE Status NOT LIKE "Seed%" and AlbumID=?', [albumid])
myDB.action(
'UPDATE snatched SET status = "Processed" WHERE Status NOT LIKE "Seed%" and AlbumID=?',
[albumid])
# Check if torrent has finished seeding
if headphones.CONFIG.TORRENT_DOWNLOADER == 1 or headphones.CONFIG.TORRENT_DOWNLOADER == 2:
seed_snatched = myDB.action('SELECT * from snatched WHERE Status="Seed_Snatched" and AlbumID=?', [albumid]).fetchone()
seed_snatched = myDB.action(
'SELECT * from snatched WHERE Status="Seed_Snatched" and AlbumID=?',
[albumid]).fetchone()
if seed_snatched:
hash = seed_snatched['FolderName']
torrent_removed = False
logger.info(u'%s - %s. Checking if torrent has finished seeding and can be removed' % (release['ArtistName'], release['AlbumTitle']))
logger.info(u'%s - %s. Checking if torrent has finished seeding and can be removed' % (
release['ArtistName'], release['AlbumTitle']))
if headphones.CONFIG.TORRENT_DOWNLOADER == 1:
torrent_removed = transmission.removeTorrent(hash, True)
else:
@@ -410,15 +458,20 @@ def doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list,
# Torrent removed, delete the snatched record, else update Status for scheduled job to check
if torrent_removed:
myDB.action('DELETE from snatched WHERE status = "Seed_Snatched" and AlbumID=?', [albumid])
myDB.action('DELETE from snatched WHERE status = "Seed_Snatched" and AlbumID=?',
[albumid])
else:
myDB.action('UPDATE snatched SET status = "Seed_Processed" WHERE status = "Seed_Snatched" and AlbumID=?', [albumid])
myDB.action(
'UPDATE snatched SET status = "Seed_Processed" WHERE status = "Seed_Snatched" and AlbumID=?',
[albumid])
# Update the have tracks for all created dirs:
for albumpath in albumpaths:
librarysync.libraryScan(dir=albumpath, append=True, ArtistID=release['ArtistID'], ArtistName=release['ArtistName'])
librarysync.libraryScan(dir=albumpath, append=True, ArtistID=release['ArtistID'],
ArtistName=release['ArtistName'])
logger.info(u'Post-processing for %s - %s complete' % (release['ArtistName'], release['AlbumTitle']))
logger.info(
u'Post-processing for %s - %s complete' % (release['ArtistName'], release['AlbumTitle']))
pushmessage = release['ArtistName'] + ' - ' + release['AlbumTitle']
statusmessage = "Download and Postprocessing completed"
@@ -516,6 +569,9 @@ def doPostProcessing(albumid, albumpath, release, tracks, downloaded_track_list,
subject = release['ArtistName'] + ' - ' + release['AlbumTitle']
email.notify(subject, "Download and Postprocessing completed")
if new_folder:
shutil.rmtree(new_folder)
def embedAlbumArt(artwork, downloaded_track_list):
logger.info('Embedding album art')
@@ -524,7 +580,8 @@ def embedAlbumArt(artwork, downloaded_track_list):
try:
f = MediaFile(downloaded_track)
except:
logger.error(u'Could not read %s. Not adding album art' % downloaded_track.decode(headphones.SYS_ENCODING, 'replace'))
logger.error(u'Could not read %s. Not adding album art' % downloaded_track.decode(
headphones.SYS_ENCODING, 'replace'))
continue
logger.debug('Adding album art to: %s' % downloaded_track)
@@ -533,7 +590,8 @@ def embedAlbumArt(artwork, downloaded_track_list):
f.art = artwork
f.save()
except Exception as e:
logger.error(u'Error embedding album art to: %s. Error: %s' % (downloaded_track.decode(headphones.SYS_ENCODING, 'replace'), str(e)))
logger.error(u'Error embedding album art to: %s. Error: %s' % (
downloaded_track.decode(headphones.SYS_ENCODING, 'replace'), str(e)))
continue
@@ -546,16 +604,18 @@ def addAlbumArt(artwork, albumpath, release):
year = ''
values = {'$Artist': release['ArtistName'],
'$Album': release['AlbumTitle'],
'$Year': year,
'$artist': release['ArtistName'].lower(),
'$album': release['AlbumTitle'].lower(),
'$year': year
}
'$Album': release['AlbumTitle'],
'$Year': year,
'$artist': release['ArtistName'].lower(),
'$album': release['AlbumTitle'].lower(),
'$year': year
}
album_art_name = helpers.replace_all(headphones.CONFIG.ALBUM_ART_FORMAT.strip(), values) + ".jpg"
album_art_name = helpers.replace_all(headphones.CONFIG.ALBUM_ART_FORMAT.strip(),
values) + ".jpg"
album_art_name = helpers.replace_illegal_chars(album_art_name).encode(headphones.SYS_ENCODING, 'replace')
album_art_name = helpers.replace_illegal_chars(album_art_name).encode(headphones.SYS_ENCODING,
'replace')
if headphones.CONFIG.FILE_UNDERSCORES:
album_art_name = album_art_name.replace(' ', '_')
@@ -581,7 +641,8 @@ def cleanupFiles(albumpath):
try:
os.remove(os.path.join(r, files))
except Exception as e:
logger.error(u'Could not remove file: %s. Error: %s' % (files.decode(headphones.SYS_ENCODING, 'replace'), e))
logger.error(u'Could not remove file: %s. Error: %s' % (
files.decode(headphones.SYS_ENCODING, 'replace'), e))
def renameNFO(albumpath):
@@ -590,12 +651,16 @@ def renameNFO(albumpath):
for r, d, f in os.walk(albumpath):
for file in f:
if file.lower().endswith('.nfo'):
logger.debug('Renaming: "%s" to "%s"' % (file.decode(headphones.SYS_ENCODING, 'replace'), file.decode(headphones.SYS_ENCODING, 'replace') + '-orig'))
logger.debug('Renaming: "%s" to "%s"' % (
file.decode(headphones.SYS_ENCODING, 'replace'),
file.decode(headphones.SYS_ENCODING, 'replace') + '-orig'))
try:
new_file_name = os.path.join(r, file)[:-3] + 'orig.nfo'
os.rename(os.path.join(r, file), new_file_name)
except Exception as e:
logger.error(u'Could not rename file: %s. Error: %s' % (os.path.join(r, file).decode(headphones.SYS_ENCODING, 'replace'), e))
logger.error(u'Could not rename file: %s. Error: %s' % (
os.path.join(r, file).decode(headphones.SYS_ENCODING, 'replace'), e))
def moveFiles(albumpath, release, tracks):
logger.info("Moving files: %s" % albumpath)
@@ -624,25 +689,26 @@ def moveFiles(albumpath, release, tracks):
for r, d, f in os.walk(albumpath):
try:
origfolder = os.path.basename(os.path.normpath(r).decode(headphones.SYS_ENCODING, 'replace'))
origfolder = os.path.basename(
os.path.normpath(r).decode(headphones.SYS_ENCODING, 'replace'))
except:
origfolder = u''
values = {'$Artist': artist,
'$SortArtist': sortname,
'$Album': album,
'$Year': year,
'$Type': releasetype,
'$OriginalFolder': origfolder,
'$First': firstchar.upper(),
'$artist': artist.lower(),
'$sortartist': sortname.lower(),
'$album': album.lower(),
'$year': year,
'$type': releasetype.lower(),
'$first': firstchar.lower(),
'$originalfolder': origfolder.lower()
}
'$SortArtist': sortname,
'$Album': album,
'$Year': year,
'$Type': releasetype,
'$OriginalFolder': origfolder,
'$First': firstchar.upper(),
'$artist': artist.lower(),
'$sortartist': sortname.lower(),
'$album': album.lower(),
'$year': year,
'$type': releasetype.lower(),
'$first': firstchar.lower(),
'$originalfolder': origfolder.lower()
}
folder = helpers.replace_all(headphones.CONFIG.FOLDER_FORMAT.strip(), values, normalize=True)
@@ -666,15 +732,20 @@ def moveFiles(albumpath, release, tracks):
files_to_move.append(os.path.join(r, files))
if any(files.lower().endswith('.' + x.lower()) for x in headphones.LOSSY_MEDIA_FORMATS):
lossy_media = True
if any(files.lower().endswith('.' + x.lower()) for x in headphones.LOSSLESS_MEDIA_FORMATS):
if any(files.lower().endswith('.' + x.lower()) for x in
headphones.LOSSLESS_MEDIA_FORMATS):
lossless_media = True
# Do some sanity checking to see what directories we need to create:
make_lossy_folder = False
make_lossless_folder = False
lossy_destination_path = os.path.normpath(os.path.join(headphones.CONFIG.DESTINATION_DIR, folder)).encode(headphones.SYS_ENCODING, 'replace')
lossless_destination_path = os.path.normpath(os.path.join(headphones.CONFIG.LOSSLESS_DESTINATION_DIR, folder)).encode(headphones.SYS_ENCODING, 'replace')
lossy_destination_path = os.path.normpath(
os.path.join(headphones.CONFIG.DESTINATION_DIR, folder)).encode(headphones.SYS_ENCODING,
'replace')
lossless_destination_path = os.path.normpath(
os.path.join(headphones.CONFIG.LOSSLESS_DESTINATION_DIR, folder)).encode(
headphones.SYS_ENCODING, 'replace')
# If they set a destination dir for lossless media, only create the lossy folder if there is lossy media
if headphones.CONFIG.LOSSLESS_DESTINATION_DIR:
@@ -698,7 +769,9 @@ def moveFiles(albumpath, release, tracks):
try:
shutil.rmtree(lossless_destination_path)
except Exception as e:
logger.error("Error deleting existing folder: %s. Creating duplicate folder. Error: %s" % (lossless_destination_path.decode(headphones.SYS_ENCODING, 'replace'), e))
logger.error(
"Error deleting existing folder: %s. Creating duplicate folder. Error: %s" % (
lossless_destination_path.decode(headphones.SYS_ENCODING, 'replace'), e))
create_duplicate_folder = True
if not headphones.CONFIG.REPLACE_EXISTING_FOLDERS or create_duplicate_folder:
@@ -707,7 +780,9 @@ def moveFiles(albumpath, release, tracks):
i = 1
while True:
newfolder = temp_folder + '[%i]' % i
lossless_destination_path = os.path.normpath(os.path.join(headphones.CONFIG.LOSSLESS_DESTINATION_DIR, newfolder)).encode(headphones.SYS_ENCODING, 'replace')
lossless_destination_path = os.path.normpath(
os.path.join(headphones.CONFIG.LOSSLESS_DESTINATION_DIR, newfolder)).encode(
headphones.SYS_ENCODING, 'replace')
if os.path.exists(lossless_destination_path):
i += 1
else:
@@ -718,7 +793,8 @@ def moveFiles(albumpath, release, tracks):
try:
os.makedirs(lossless_destination_path)
except Exception as e:
logger.error('Could not create lossless folder for %s. (Error: %s)' % (release['AlbumTitle'], e))
logger.error('Could not create lossless folder for %s. (Error: %s)' % (
release['AlbumTitle'], e))
if not make_lossy_folder:
return [albumpath]
@@ -731,7 +807,9 @@ def moveFiles(albumpath, release, tracks):
try:
shutil.rmtree(lossy_destination_path)
except Exception as e:
logger.error("Error deleting existing folder: %s. Creating duplicate folder. Error: %s" % (lossy_destination_path.decode(headphones.SYS_ENCODING, 'replace'), e))
logger.error(
"Error deleting existing folder: %s. Creating duplicate folder. Error: %s" % (
lossy_destination_path.decode(headphones.SYS_ENCODING, 'replace'), e))
create_duplicate_folder = True
if not headphones.CONFIG.REPLACE_EXISTING_FOLDERS or create_duplicate_folder:
@@ -740,7 +818,9 @@ def moveFiles(albumpath, release, tracks):
i = 1
while True:
newfolder = temp_folder + '[%i]' % i
lossy_destination_path = os.path.normpath(os.path.join(headphones.CONFIG.DESTINATION_DIR, newfolder)).encode(headphones.SYS_ENCODING, 'replace')
lossy_destination_path = os.path.normpath(
os.path.join(headphones.CONFIG.DESTINATION_DIR, newfolder)).encode(
headphones.SYS_ENCODING, 'replace')
if os.path.exists(lossy_destination_path):
i += 1
else:
@@ -751,7 +831,8 @@ def moveFiles(albumpath, release, tracks):
try:
os.makedirs(lossy_destination_path)
except Exception as e:
logger.error('Could not create folder for %s. Not moving: %s' % (release['AlbumTitle'], e))
logger.error(
'Could not create folder for %s. Not moving: %s' % (release['AlbumTitle'], e))
return [albumpath]
logger.info('Checking which files we need to move.....')
@@ -762,26 +843,34 @@ def moveFiles(albumpath, release, tracks):
for file_to_move in files_to_move:
if any(file_to_move.lower().endswith('.' + x.lower()) for x in headphones.LOSSY_MEDIA_FORMATS):
if any(file_to_move.lower().endswith('.' + x.lower()) for x in
headphones.LOSSY_MEDIA_FORMATS):
helpers.smartMove(file_to_move, lossy_destination_path)
elif any(file_to_move.lower().endswith('.' + x.lower()) for x in headphones.LOSSLESS_MEDIA_FORMATS):
elif any(file_to_move.lower().endswith('.' + x.lower()) for x in
headphones.LOSSLESS_MEDIA_FORMATS):
helpers.smartMove(file_to_move, lossless_destination_path)
# If it's a non-music file, move it to both dirs
# TODO: Move specific-to-lossless files to the lossless dir only
else:
moved_to_lossy_folder = helpers.smartMove(file_to_move, lossy_destination_path, delete=False)
moved_to_lossless_folder = helpers.smartMove(file_to_move, lossless_destination_path, delete=False)
moved_to_lossy_folder = helpers.smartMove(file_to_move, lossy_destination_path,
delete=False)
moved_to_lossless_folder = helpers.smartMove(file_to_move,
lossless_destination_path,
delete=False)
if moved_to_lossy_folder or moved_to_lossless_folder:
try:
os.remove(file_to_move)
except Exception as e:
logger.error("Error deleting file '" + file_to_move.decode(headphones.SYS_ENCODING, 'replace') + "' from source directory")
logger.error(
"Error deleting file '" + file_to_move.decode(headphones.SYS_ENCODING,
'replace') + "' from source directory")
else:
logger.error("Error copying '" + file_to_move.decode(headphones.SYS_ENCODING, 'replace') + "'. Not deleting from download directory")
logger.error("Error copying '" + file_to_move.decode(headphones.SYS_ENCODING,
'replace') + "'. Not deleting from download directory")
elif make_lossless_folder and not make_lossy_folder:
@@ -809,10 +898,16 @@ def moveFiles(albumpath, release, tracks):
temp_f = os.path.join(temp_f, f)
try:
os.chmod(os.path.normpath(temp_f).encode(headphones.SYS_ENCODING, 'replace'), int(headphones.CONFIG.FOLDER_PERMISSIONS, 8))
except Exception as e:
logger.error("Error trying to change permissions on folder: %s. %s", temp_f, e)
if headphones.CONFIG.FOLDER_PERMISSIONS_ENABLED:
try:
os.chmod(os.path.normpath(temp_f).encode(headphones.SYS_ENCODING, 'replace'),
int(headphones.CONFIG.FOLDER_PERMISSIONS, 8))
except Exception as e:
logger.error("Error trying to change permissions on folder: %s. %s",
temp_f.decode(headphones.SYS_ENCODING, 'replace'), e)
else:
logger.debug("Not changing folder permissions, since it is disabled: %s",
temp_f.decode(headphones.SYS_ENCODING, 'replace'))
# If we failed to move all the files out of the directory, this will fail too
try:
@@ -831,7 +926,6 @@ def moveFiles(albumpath, release, tracks):
def correctMetadata(albumid, release, downloaded_track_list):
logger.info('Preparing to write metadata to tracks....')
lossy_items = []
lossless_items = []
@@ -841,14 +935,18 @@ def correctMetadata(albumid, release, downloaded_track_list):
try:
if any(downloaded_track.lower().endswith('.' + x.lower()) for x in headphones.LOSSLESS_MEDIA_FORMATS):
if any(downloaded_track.lower().endswith('.' + x.lower()) for x in
headphones.LOSSLESS_MEDIA_FORMATS):
lossless_items.append(beets.library.Item.from_path(downloaded_track))
elif any(downloaded_track.lower().endswith('.' + x.lower()) for x in headphones.LOSSY_MEDIA_FORMATS):
elif any(downloaded_track.lower().endswith('.' + x.lower()) for x in
headphones.LOSSY_MEDIA_FORMATS):
lossy_items.append(beets.library.Item.from_path(downloaded_track))
else:
logger.warn("Skipping: %s because it is not a mutagen friendly file format", downloaded_track.decode(headphones.SYS_ENCODING, 'replace'))
logger.warn("Skipping: %s because it is not a mutagen friendly file format",
downloaded_track.decode(headphones.SYS_ENCODING, 'replace'))
except Exception as e:
logger.error("Beets couldn't create an Item from: %s - not a media file? %s", downloaded_track.decode(headphones.SYS_ENCODING, 'replace'), str(e))
logger.error("Beets couldn't create an Item from: %s - not a media file? %s",
downloaded_track.decode(headphones.SYS_ENCODING, 'replace'), str(e))
for items in [lossy_items, lossless_items]:
@@ -856,18 +954,24 @@ def correctMetadata(albumid, release, downloaded_track_list):
continue
try:
cur_artist, cur_album, candidates, rec = autotag.tag_album(items, search_artist=helpers.latinToAscii(release['ArtistName']), search_album=helpers.latinToAscii(release['AlbumTitle']))
cur_artist, cur_album, candidates, rec = autotag.tag_album(items,
search_artist=helpers.latinToAscii(
release['ArtistName']),
search_album=helpers.latinToAscii(
release['AlbumTitle']))
except Exception as e:
logger.error('Error getting recommendation: %s. Not writing metadata', e)
return False
if str(rec) == 'Recommendation.none':
logger.warn('No accurate album match found for %s, %s - not writing metadata', release['ArtistName'], release['AlbumTitle'])
logger.warn('No accurate album match found for %s, %s - not writing metadata',
release['ArtistName'], release['AlbumTitle'])
return False
if candidates:
dist, info, mapping, extra_items, extra_tracks = candidates[0]
else:
logger.warn('No accurate album match found for %s, %s - not writing metadata', release['ArtistName'], release['AlbumTitle'])
logger.warn('No accurate album match found for %s, %s - not writing metadata',
release['ArtistName'], release['AlbumTitle'])
return False
logger.info('Beets recommendation for tagging items: %s' % rec)
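correctMetadata hands the files to beets' autotagger and only writes tags when the recommendation is strong enough. Schematically, mirroring the calls visible in this hunk (the file path, artist and album are placeholders; autotag.apply_metadata is assumed from beets' autotag module, since the step between matching and writing is elided here):

# Sketch mirroring the beets autotag flow above. The file path, artist
# and album are placeholders; autotag.apply_metadata is an assumption.
import beets.library
from beets import autotag

items = [beets.library.Item.from_path('/music/01 - Some Track.mp3')]
cur_artist, cur_album, candidates, rec = autotag.tag_album(
    items, search_artist='Some Artist', search_album='Some Album')
if str(rec) == 'Recommendation.none' or not candidates:
    print 'No accurate album match found - not writing metadata'
else:
    dist, info, mapping, extra_items, extra_tracks = candidates[0]
    autotag.apply_metadata(info, mapping)
    for item in items:
        item.write()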
@@ -887,13 +991,16 @@ def correctMetadata(albumid, release, downloaded_track_list):
for item in items:
try:
item.write()
logger.info("Successfully applied metadata to: %s", item.path.decode(headphones.SYS_ENCODING, 'replace'))
logger.info("Successfully applied metadata to: %s",
item.path.decode(headphones.SYS_ENCODING, 'replace'))
except Exception as e:
logger.warn("Error writing metadata to '%s': %s", item.path.decode(headphones.SYS_ENCODING, 'replace'), str(e))
logger.warn("Error writing metadata to '%s': %s",
item.path.decode(headphones.SYS_ENCODING, 'replace'), str(e))
return False
return True
def embedLyrics(downloaded_track_list):
logger.info('Adding lyrics')
@@ -907,14 +1014,18 @@ def embedLyrics(downloaded_track_list):
for downloaded_track in downloaded_track_list:
try:
if any(downloaded_track.lower().endswith('.' + x.lower()) for x in headphones.LOSSLESS_MEDIA_FORMATS):
if any(downloaded_track.lower().endswith('.' + x.lower()) for x in
headphones.LOSSLESS_MEDIA_FORMATS):
lossless_items.append(beets.library.Item.from_path(downloaded_track))
elif any(downloaded_track.lower().endswith('.' + x.lower()) for x in headphones.LOSSY_MEDIA_FORMATS):
elif any(downloaded_track.lower().endswith('.' + x.lower()) for x in
headphones.LOSSY_MEDIA_FORMATS):
lossy_items.append(beets.library.Item.from_path(downloaded_track))
else:
logger.warn("Skipping: %s because it is not a mutagen friendly file format", downloaded_track.decode(headphones.SYS_ENCODING, 'replace'))
logger.warn("Skipping: %s because it is not a mutagen friendly file format",
downloaded_track.decode(headphones.SYS_ENCODING, 'replace'))
except Exception as e:
logger.error("Beets couldn't create an Item from: %s - not a media file? %s", downloaded_track.decode(headphones.SYS_ENCODING, 'replace'), str(e))
logger.error("Beets couldn't create an Item from: %s - not a media file? %s",
downloaded_track.decode(headphones.SYS_ENCODING, 'replace'), str(e))
for items in [lossy_items, lossless_items]:
@@ -954,7 +1065,8 @@ def renameFiles(albumpath, downloaded_track_list, release):
try:
f = MediaFile(downloaded_track)
except:
logger.info("MediaFile couldn't parse: %s", downloaded_track.decode(headphones.SYS_ENCODING, 'replace'))
logger.info("MediaFile couldn't parse: %s",
downloaded_track.decode(headphones.SYS_ENCODING, 'replace'))
continue
if not f.disc:
@@ -989,26 +1101,28 @@ def renameFiles(albumpath, downloaded_track_list, release):
sortname = artistname
values = {'$Disc': discnumber,
'$Track': tracknumber,
'$Title': title,
'$Artist': artistname,
'$SortArtist': sortname,
'$Album': release['AlbumTitle'],
'$Year': year,
'$disc': discnumber,
'$track': tracknumber,
'$title': title.lower(),
'$artist': artistname.lower(),
'$sortartist': sortname.lower(),
'$album': release['AlbumTitle'].lower(),
'$year': year
}
'$Track': tracknumber,
'$Title': title,
'$Artist': artistname,
'$SortArtist': sortname,
'$Album': release['AlbumTitle'],
'$Year': year,
'$disc': discnumber,
'$track': tracknumber,
'$title': title.lower(),
'$artist': artistname.lower(),
'$sortartist': sortname.lower(),
'$album': release['AlbumTitle'].lower(),
'$year': year
}
ext = os.path.splitext(downloaded_track)[1]
new_file_name = helpers.replace_all(headphones.CONFIG.FILE_FORMAT.strip(), values).replace('/', '_') + ext
new_file_name = helpers.replace_all(headphones.CONFIG.FILE_FORMAT.strip(),
values).replace('/', '_') + ext
new_file_name = helpers.replace_illegal_chars(new_file_name).encode(headphones.SYS_ENCODING, 'replace')
new_file_name = helpers.replace_illegal_chars(new_file_name).encode(headphones.SYS_ENCODING,
'replace')
if headphones.CONFIG.FILE_UNDERSCORES:
new_file_name = new_file_name.replace(' ', '_')
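The values dictionary drives a simple token substitution: every $Token in the user's file-format string is replaced by its value, the result has its illegal characters stripped, and optionally spaces become underscores. A naive stand-in for helpers.replace_all, whose real implementation is not shown in this diff:

# Sketch: naive stand-in for helpers.replace_all (an assumption - the
# real helper is not part of this diff). Longer tokens are replaced
# first so '$SortArtist' is not clobbered by '$Artist'.
def replace_all(text, values):
    for token in sorted(values, key=len, reverse=True):
        text = text.replace(token, values[token])
    return text

values = {'$Artist': 'Some Artist', '$Album': 'Some Album',
          '$Track': '01', '$Title': 'First Song', '$Year': '2015'}
print replace_all('$Track - $Artist - $Title', values)
# -> 01 - Some Artist - First Song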
@@ -1019,29 +1133,36 @@ def renameFiles(albumpath, downloaded_track_list, release):
new_file = os.path.join(albumpath, new_file_name)
if downloaded_track == new_file_name:
logger.debug("Renaming for: " + downloaded_track.decode(headphones.SYS_ENCODING, 'replace') + " is not neccessary")
logger.debug("Renaming for: " + downloaded_track.decode(headphones.SYS_ENCODING,
'replace') + " is not neccessary")
continue
logger.debug('Renaming %s ---> %s', downloaded_track.decode(headphones.SYS_ENCODING, 'replace'), new_file_name.decode(headphones.SYS_ENCODING, 'replace'))
logger.debug('Renaming %s ---> %s',
downloaded_track.decode(headphones.SYS_ENCODING, 'replace'),
new_file_name.decode(headphones.SYS_ENCODING, 'replace'))
try:
os.rename(downloaded_track, new_file)
except Exception as e:
logger.error('Error renaming file: %s. Error: %s', downloaded_track.decode(headphones.SYS_ENCODING, 'replace'), e)
logger.error('Error renaming file: %s. Error: %s',
downloaded_track.decode(headphones.SYS_ENCODING, 'replace'), e)
continue
def updateFilePermissions(albumpaths):
for folder in albumpaths:
logger.info("Updating file permissions in %s", folder)
for r, d, f in os.walk(folder):
for files in f:
full_path = os.path.join(r, files)
try:
os.chmod(full_path, int(headphones.CONFIG.FILE_PERMISSIONS, 8))
except:
logger.error("Could not change permissions for file: %s", full_path)
continue
if headphones.CONFIG.FILE_PERMISSIONS_ENABLED:
try:
os.chmod(full_path, int(headphones.CONFIG.FILE_PERMISSIONS, 8))
except:
logger.error("Could not change permissions for file: %s", full_path)
continue
else:
logger.debug("Not changing file permissions, since it is disabled: %s",
full_path.decode(headphones.SYS_ENCODING, 'replace'))
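Both permission hunks wrap os.chmod in the same new gate: skip entirely unless the relevant *_PERMISSIONS_ENABLED flag is set, and parse the configured permission string as octal with int(value, 8). In isolation:

# Sketch: apply a config-style permission string such as '755' or '644'.
import os
import tempfile

def apply_permissions(path, permissions, enabled=True):
    if not enabled:  # mirrors the new *_PERMISSIONS_ENABLED gate
        return
    os.chmod(path, int(permissions, 8))  # '755' (octal string) -> mode 0755

fd, path = tempfile.mkstemp()
os.close(fd)
apply_permissions(path, '644')
os.remove(path)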
def renameUnprocessedFolder(path, tag):
@@ -1064,7 +1185,6 @@ def renameUnprocessedFolder(path, tag):
def forcePostProcess(dir=None, expand_subfolders=True, album_dir=None, keep_original_folder=False):
logger.info('Force checking download folder for completed downloads')
ignored = 0
@@ -1077,9 +1197,11 @@ def forcePostProcess(dir=None, expand_subfolders=True, album_dir=None, keep_orig
if dir:
download_dirs.append(dir.encode(headphones.SYS_ENCODING, 'replace'))
if headphones.CONFIG.DOWNLOAD_DIR and not dir:
download_dirs.append(headphones.CONFIG.DOWNLOAD_DIR.encode(headphones.SYS_ENCODING, 'replace'))
download_dirs.append(
headphones.CONFIG.DOWNLOAD_DIR.encode(headphones.SYS_ENCODING, 'replace'))
if headphones.CONFIG.DOWNLOAD_TORRENT_DIR and not dir:
download_dirs.append(headphones.CONFIG.DOWNLOAD_TORRENT_DIR.encode(headphones.SYS_ENCODING, 'replace'))
download_dirs.append(
headphones.CONFIG.DOWNLOAD_TORRENT_DIR.encode(headphones.SYS_ENCODING, 'replace'))
# If DOWNLOAD_DIR and DOWNLOAD_TORRENT_DIR are the same, remove the duplicate to prevent us from trying to process the same folder twice.
download_dirs = list(set(download_dirs))
@@ -1096,7 +1218,8 @@ def forcePostProcess(dir=None, expand_subfolders=True, album_dir=None, keep_orig
# Scan for subfolders
subfolders = os.listdir(download_dir)
ignored += helpers.path_filter_patterns(subfolders,
headphones.CONFIG.IGNORED_FOLDERS, root=download_dir)
headphones.CONFIG.IGNORED_FOLDERS,
root=download_dir)
for folder in subfolders:
path_to_folder = os.path.join(download_dir, folder)
@@ -1113,7 +1236,7 @@ def forcePostProcess(dir=None, expand_subfolders=True, album_dir=None, keep_orig
if folders:
logger.debug('Expanded post processing folders: %s', folders)
logger.info('Found %d folders to process (%d ignored).',
len(folders), ignored)
len(folders), ignored)
else:
logger.info('Found no folders to process. Aborting.')
return
@@ -1131,15 +1254,23 @@ def forcePostProcess(dir=None, expand_subfolders=True, album_dir=None, keep_orig
# underscores -> dots (this might be hit or miss since it assumes all
# spaces/underscores came from sab replacing values)
logger.debug('Attempting to find album in the snatched table')
snatched = myDB.action('SELECT AlbumID, Title, Kind, Status from snatched WHERE FolderName LIKE ?', [folder_basename]).fetchone()
snatched = myDB.action(
'SELECT AlbumID, Title, Kind, Status from snatched WHERE FolderName LIKE ?',
[folder_basename]).fetchone()
if snatched:
if headphones.CONFIG.KEEP_TORRENT_FILES and snatched['Kind'] == 'torrent' and snatched['Status'] == 'Processed':
logger.info('%s is a torrent folder being preserved for seeding and has already been processed. Skipping.', folder_basename)
if headphones.CONFIG.KEEP_TORRENT_FILES and snatched['Kind'] == 'torrent' and snatched[
'Status'] == 'Processed':
logger.info(
'%s is a torrent folder being preserved for seeding and has already been processed. Skipping.',
folder_basename)
continue
else:
logger.info('Found a match in the database: %s. Verifying to make sure it is the correct album', snatched['Title'])
verify(snatched['AlbumID'], folder, snatched['Kind'], keep_original_folder=keep_original_folder)
logger.info(
'Found a match in the database: %s. Verifying to make sure it is the correct album',
snatched['Title'])
verify(snatched['AlbumID'], folder, snatched['Kind'],
keep_original_folder=keep_original_folder)
continue
# Attempt 2: strip release group id from filename
@@ -1153,13 +1284,19 @@ def forcePostProcess(dir=None, expand_subfolders=True, album_dir=None, keep_orig
if rgid:
rgid = possible_rgid
release = myDB.action('SELECT ArtistName, AlbumTitle, AlbumID from albums WHERE AlbumID=?', [rgid]).fetchone()
release = myDB.action(
'SELECT ArtistName, AlbumTitle, AlbumID from albums WHERE AlbumID=?',
[rgid]).fetchone()
if release:
logger.info('Found a match in the database: %s - %s. Verifying to make sure it is the correct album', release['ArtistName'], release['AlbumTitle'])
verify(release['AlbumID'], folder, forced=True, keep_original_folder=keep_original_folder)
logger.info(
'Found a match in the database: %s - %s. Verifying to make sure it is the correct album',
release['ArtistName'], release['AlbumTitle'])
verify(release['AlbumID'], folder, forced=True,
keep_original_folder=keep_original_folder)
continue
else:
logger.info('Found a (possibly) valid Musicbrainz release group id in album folder name.')
logger.info(
'Found a (possibly) valid Musicbrainz release group id in album folder name.')
verify(rgid, folder, forced=True)
continue
@@ -1172,13 +1309,18 @@ def forcePostProcess(dir=None, expand_subfolders=True, album_dir=None, keep_orig
name = album = year = None
if name and album:
release = myDB.action('SELECT AlbumID, ArtistName, AlbumTitle from albums WHERE ArtistName LIKE ? and AlbumTitle LIKE ?', [name, album]).fetchone()
release = myDB.action(
'SELECT AlbumID, ArtistName, AlbumTitle from albums WHERE ArtistName LIKE ? and AlbumTitle LIKE ?',
[name, album]).fetchone()
if release:
logger.info('Found a match in the database: %s - %s. Verifying to make sure it is the correct album', release['ArtistName'], release['AlbumTitle'])
logger.info(
'Found a match in the database: %s - %s. Verifying to make sure it is the correct album',
release['ArtistName'], release['AlbumTitle'])
verify(release['AlbumID'], folder, keep_original_folder=keep_original_folder)
continue
else:
logger.info('Querying MusicBrainz for the release group id for: %s - %s', name, album)
logger.info('Querying MusicBrainz for the release group id for: %s - %s', name,
album)
try:
rgid = mb.findAlbumID(helpers.latinToAscii(name), helpers.latinToAscii(album))
except:
@@ -1207,13 +1349,18 @@ def forcePostProcess(dir=None, expand_subfolders=True, album_dir=None, keep_orig
name = album = None
if name and album:
release = myDB.action('SELECT AlbumID, ArtistName, AlbumTitle from albums WHERE ArtistName LIKE ? and AlbumTitle LIKE ?', [name, album]).fetchone()
release = myDB.action(
'SELECT AlbumID, ArtistName, AlbumTitle from albums WHERE ArtistName LIKE ? and AlbumTitle LIKE ?',
[name, album]).fetchone()
if release:
logger.info('Found a match in the database: %s - %s. Verifying to make sure it is the correct album', release['ArtistName'], release['AlbumTitle'])
logger.info(
'Found a match in the database: %s - %s. Verifying to make sure it is the correct album',
release['ArtistName'], release['AlbumTitle'])
verify(release['AlbumID'], folder, keep_original_folder=keep_original_folder)
continue
else:
logger.info('Querying MusicBrainz for the release group id for: %s - %s', name, album)
logger.info('Querying MusicBrainz for the release group id for: %s - %s', name,
album)
try:
rgid = mb.findAlbumID(helpers.latinToAscii(name), helpers.latinToAscii(album))
except:
@@ -1231,13 +1378,18 @@ def forcePostProcess(dir=None, expand_subfolders=True, album_dir=None, keep_orig
logger.debug('Attempt to extract album name by assuming it is the folder name')
if '-' not in folder_basename:
release = myDB.action('SELECT AlbumID, ArtistName, AlbumTitle from albums WHERE AlbumTitle LIKE ?', [folder_basename]).fetchone()
release = myDB.action(
'SELECT AlbumID, ArtistName, AlbumTitle from albums WHERE AlbumTitle LIKE ?',
[folder_basename]).fetchone()
if release:
logger.info('Found a match in the database: %s - %s. Verifying to make sure it is the correct album', release['ArtistName'], release['AlbumTitle'])
logger.info(
'Found a match in the database: %s - %s. Verifying to make sure it is the correct album',
release['ArtistName'], release['AlbumTitle'])
verify(release['AlbumID'], folder, keep_original_folder=keep_original_folder)
continue
else:
logger.info('Querying MusicBrainz for the release group id for: %s', folder_basename)
logger.info('Querying MusicBrainz for the release group id for: %s',
folder_basename)
try:
rgid = mb.findAlbumID(album=helpers.latinToAscii(folder_basename))
except:
@@ -1252,6 +1404,6 @@ def forcePostProcess(dir=None, expand_subfolders=True, album_dir=None, keep_orig
# Fail here
logger.info("Couldn't parse '%s' into any valid format. If adding " \
"albums from another source, they must be in an 'Artist - Album " \
"[Year]' format, or end with the musicbrainz release group id.",
folder_basename)
"albums from another source, they must be in an 'Artist - Album " \
"[Year]' format, or end with the musicbrainz release group id.",
folder_basename)

View File

@@ -13,16 +13,20 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
from headphones import logger
from xml.dom import minidom
from bs4 import BeautifulSoup
import collections
import sys
from bs4 import BeautifulSoup
import requests
from headphones import logger
import feedparser
import headphones
import headphones.lock
import collections
# Disable SSL certificate warnings. We have our own handling
requests.packages.urllib3.disable_warnings()
# Dictionary with last request times, for rate limiting.
last_requests = collections.defaultdict(int)
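A sketch of how a last-request-times dictionary like this typically backs per-host rate limiting; the one-second interval and the helper are illustrative assumptions, not this module's actual logic:

```python
import collections
import time
import urlparse  # Python 2 stdlib; urllib.parse on Python 3

last_requests = collections.defaultdict(int)  # host -> unix time of the last request
MIN_INTERVAL = 1.0                            # assumed: at most one request per second per host

def wait_for_slot(url):
    """Hypothetical helper: block until url's host may be hit again."""
    host = urlparse.urlparse(url).hostname
    elapsed = time.time() - last_requests[host]
    if elapsed < MIN_INTERVAL:
        time.sleep(MIN_INTERVAL - elapsed)
    last_requests[host] = time.time()

wait_for_slot('http://example.org/feed')  # first call returns immediately
wait_for_slot('http://example.org/feed')  # second call sleeps the remainder of the interval
```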
@@ -51,6 +55,14 @@ def request_response(url, method="get", auto_raise=True,
# pose a security issue!
kwargs["verify"] = bool(headphones.CONFIG.VERIFY_SSL_CERT)
# This fix is put in place for systems with broken SSL (like QNAP)
if not headphones.CONFIG.VERIFY_SSL_CERT and sys.version_info >= (2, 7, 9):
try:
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
except:
pass
# Map method to the request.XXX method. This is a simple hack, but it
# allows requests to apply more magic per method. See lib/requests/api.py.
request_method = getattr(requests, method.lower())
@@ -95,7 +107,8 @@ def request_response(url, method="get", auto_raise=True,
"host is up and running.")
except requests.Timeout:
logger.error(
"Request timed out. The remote host did not respond in a timely manner.")
"Request timed out. The remote host did not respond in a timely "
"manner.")
except requests.HTTPError as e:
if e.response is not None:
if e.response.status_code >= 500:
@@ -206,7 +219,8 @@ def server_message(response):
message = None
# First attempt is to 'read' the response as HTML
if response.headers.get("content-type") and "text/html" in response.headers.get("content-type"):
if response.headers.get("content-type") and \
"text/html" in response.headers.get("content-type"):
try:
soup = BeautifulSoup(response.content, "html5lib")
except Exception:

View File

@@ -1,19 +1,17 @@
#!/usr/bin/env python
import urllib
import requests as requests
from urlparse import urlparse
from bs4 import BeautifulSoup
import os
import time
import re
from urlparse import urlparse
import re
import requests as requests
from bs4 import BeautifulSoup
import headphones
from headphones import logger
class Rutracker(object):
def __init__(self):
self.session = requests.session()
self.timeout = 60
@@ -58,7 +56,8 @@ class Rutracker(object):
self.loggedin = True
logger.info("Successfully logged in to rutracker")
else:
logger.error("Could not login to rutracker, credentials maybe incorrect, site is down or too many attempts. Try again later")
logger.error(
"Could not login to rutracker, credentials maybe incorrect, site is down or too many attempts. Try again later")
self.loggedin = False
return self.loggedin
except Exception as e:
@@ -111,7 +110,7 @@ class Rutracker(object):
soup = BeautifulSoup(r.content, 'html5lib')
# Debug
#logger.debug (soup.prettify())
# logger.debug (soup.prettify())
# Check if still logged in
if not self.still_logged_in(soup):
@@ -130,7 +129,8 @@ class Rutracker(object):
return None
minimumseeders = int(headphones.CONFIG.NUMBEROFSEEDERS) - 1
for item in zip(i.find_all(class_='hl-tags'),i.find_all(class_='dl-stub'),i.find_all(class_='seedmed')):
for item in zip(i.find_all(class_='hl-tags'), i.find_all(class_='dl-stub'),
i.find_all(class_='seedmed')):
title = item[0].get_text()
url = item[1].get('href')
size_formatted = item[1].get_text()[:-2]
@@ -149,12 +149,15 @@ class Rutracker(object):
if size < self.maxsize and minimumseeders < int(seeds):
logger.info('Found %s. Size: %s' % (title, size_formatted))
#Torrent topic page
torrent_id = dict([part.split('=') for part in urlparse(url)[4].split('&')])['t']
# Torrent topic page
torrent_id = dict([part.split('=') for part in urlparse(url)[4].split('&')])[
't']
topicurl = 'http://rutracker.org/forum/viewtopic.php?t=' + torrent_id
rulist.append((title, size, topicurl, 'rutracker.org', 'torrent', True))
else:
logger.info("%s is larger than the maxsize or has too little seeders for this category, skipping. (Size: %i bytes, Seeders: %i)" % (title, size, int(seeds)))
logger.info(
"%s is larger than the maxsize or has too little seeders for this category, skipping. (Size: %i bytes, Seeders: %i)" % (
title, size, int(seeds)))
if not rulist:
logger.info("No valid results found from rutracker")
@@ -165,7 +168,6 @@ class Rutracker(object):
logger.error("An unknown error occurred in the rutracker parser: %s" % e)
return None
def get_torrent_data(self, url):
"""
return the .torrent data
@@ -176,14 +178,14 @@ class Rutracker(object):
cookie = {'bb_dl': torrent_id}
try:
headers = {'Referer': url}
r = self.session.get(url=downloadurl, cookies=cookie, headers=headers, timeout=self.timeout)
r = self.session.post(url=downloadurl, cookies=cookie, headers=headers,
timeout=self.timeout)
return r.content
except Exception as e:
logger.error('Error getting torrent: %s', e)
return False
#TODO get this working in utorrent.py
# TODO get this working in utorrent.py
def utorrent_add_file(self, data):
host = headphones.CONFIG.UTORRENT_HOST
@@ -197,7 +199,8 @@ class Rutracker(object):
base_url = host
url = base_url + '/gui/'
self.session.auth = (headphones.CONFIG.UTORRENT_USERNAME, headphones.CONFIG.UTORRENT_PASSWORD)
self.session.auth = (
headphones.CONFIG.UTORRENT_USERNAME, headphones.CONFIG.UTORRENT_PASSWORD)
try:
r = self.session.get(url + 'token.html')
@@ -221,4 +224,3 @@ class Rutracker(object):
self.session.post(url, params={'action': 'add-file'}, files=files)
except Exception as e:
logger.exception('Error adding file to utorrent %s', e)

View File

@@ -13,26 +13,24 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
#####################################
## Stolen from Sick-Beard's sab.py ##
#####################################
###################################
# Stolen from Sick-Beard's sab.py #
###################################
import MultipartPostHandler
import headphones
import cookielib
import httplib
import headphones
from headphones.common import USER_AGENT
from headphones import logger, helpers, request
def sab_api_call(request_type=None, params={}, **kwargs):
if not headphones.CONFIG.SAB_HOST.startswith('http'):
headphones.CONFIG.SAB_HOST = 'http://' + headphones.CONFIG.SAB_HOST
if headphones.CONFIG.SAB_HOST.endswith('/'):
headphones.CONFIG.SAB_HOST = headphones.CONFIG.SAB_HOST[0:len(headphones.CONFIG.SAB_HOST) - 1]
headphones.CONFIG.SAB_HOST = headphones.CONFIG.SAB_HOST[
0:len(headphones.CONFIG.SAB_HOST) - 1]
url = headphones.CONFIG.SAB_HOST + "/" + "api?"
@@ -42,11 +40,11 @@ def sab_api_call(request_type=None, params={}, **kwargs):
params['ma_password'] = headphones.CONFIG.SAB_PASSWORD
if headphones.CONFIG.SAB_APIKEY:
params['apikey'] = headphones.CONFIG.SAB_APIKEY
if request_type=='send_nzb' and headphones.CONFIG.SAB_CATEGORY:
if request_type == 'send_nzb' and headphones.CONFIG.SAB_CATEGORY:
params['cat'] = headphones.CONFIG.SAB_CATEGORY
params['output']='json'
params['output'] = 'json'
response = request.request_json(url, params=params, **kwargs)
@@ -57,8 +55,8 @@ def sab_api_call(request_type=None, params={}, **kwargs):
logger.debug("Successfully connected to SABnzbd on url: %s" % headphones.CONFIG.SAB_HOST)
return response
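Roughly speaking, the params dict assembled here ends up urlencoded onto the `/api?` URL by the request layer; a self-contained illustration with a made-up host and API key:

```python
import urllib  # Python 2; urllib.parse.urlencode on Python 3

SAB_HOST = 'http://localhost:8080'
params = {'mode': 'get_config', 'section': 'misc',
          'apikey': 'abc123', 'output': 'json'}

url = SAB_HOST + '/api?' + urllib.urlencode(params)
# e.g. http://localhost:8080/api?output=json&section=misc&apikey=abc123&mode=get_config
# (dict ordering is arbitrary; SABnzbd does not care about parameter order)
```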
def sendNZB(nzb):
params = {}
# if it's a normal result we just pass SAB the URL
if nzb.resultType == "nzb":
@@ -87,7 +85,8 @@ def sendNZB(nzb):
response = sab_api_call('send_nzb', params=params)
elif nzb.resultType == "nzbdata":
cookies = cookielib.CookieJar()
response = sab_api_call('send_nzb', params=params, method="post", files=files, cookies=cookies, headers=headers)
response = sab_api_call('send_nzb', params=params, method="post", files=files,
cookies=cookies, headers=headers)
if not response:
logger.info(u"No data returned from SABnzbd, NZB not sent")
@@ -102,15 +101,15 @@ def sendNZB(nzb):
def checkConfig():
params = {'mode': 'get_config',
'section': 'misc',
}
config_options = sab_api_call(params=params)
if not config_options:
logger.warn("Unable to read SABnzbd config file - cannot determine renaming options (might affect auto & forced post processing)")
logger.warn(
"Unable to read SABnzbd config file - cannot determine renaming options (might affect auto & forced post processing)")
return (0, 0)
replace_spaces = config_options['config']['misc']['replace_spaces']

File diff suppressed because it is too large

View File

@@ -13,13 +13,14 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
from headphones import db, utorrent, transmission, logger
import threading
from headphones import db, utorrent, transmission, logger
import headphones
postprocessor_lock = threading.Lock()
def checkTorrentFinished():
"""
Remove Torrent + data if Post Processed and finished Seeding
@@ -41,6 +42,7 @@ def checkTorrentFinished():
torrent_removed = utorrent.removeTorrent(hash, True)
if torrent_removed:
myDB.action('DELETE from snatched WHERE status = "Seed_Processed" and AlbumID=?', [albumid])
myDB.action('DELETE from snatched WHERE status = "Seed_Processed" and AlbumID=?',
[albumid])
logger.info("Checking finished torrents completed")

View File

@@ -13,14 +13,15 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
from headphones import logger, request
import time
import json
import base64
import urlparse
from headphones import logger, request
import headphones
# This is just a simple script to send torrents to transmission. The
# intention is to turn this into a class where we can check the state
# of the download, set the download dir, etc.
@@ -31,7 +32,7 @@ import headphones
def addTorrent(link, data=None):
method = 'torrent-add'
if link.endswith('.torrent') or data:
if link.endswith('.torrent') and not link.startswith('http') or data:
if data:
metainfo = str(base64.b64encode(data))
else:
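For context: Transmission's `torrent-add` method accepts either `metainfo` (the base64-encoded contents of a .torrent file) or `filename` (a URL, magnet link, or local path), which is why the tightened condition matters; an http link ending in `.torrent` should be handed over as `filename`, not read from disk. A hedged sketch of that branch logic (the helper is illustrative, not the module's code):

```python
import base64

def torrent_add_arguments(link, data=None):
    """Mirror the fixed condition: only encode local files or in-memory data."""
    if data or (link.endswith('.torrent') and not link.startswith('http')):
        if data:
            metainfo = base64.b64encode(data)
        else:
            with open(link, 'rb') as handle:  # a local .torrent file
                metainfo = base64.b64encode(handle.read())
        return {'metainfo': metainfo}
    # URLs (including http links to .torrent files) and magnets go through as-is:
    return {'filename': link}

assert 'filename' in torrent_add_arguments('http://example.org/a.torrent')
assert 'filename' in torrent_add_arguments('magnet:?xt=urn:btih:abc')
```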
@@ -96,7 +97,6 @@ def setSeedRatio(torrentid, ratio):
def removeTorrent(torrentid, remove_data=False):
method = 'torrent-get'
arguments = {'ids': torrentid, 'fields': ['isFinished', 'name']}
@@ -118,7 +118,8 @@ def removeTorrent(torrentid, remove_data=False):
response = torrentAction(method, arguments)
return True
else:
logger.info('%s has not finished seeding yet, torrent will not be removed, will try again on next run' % name)
logger.info(
'%s has not finished seeding yet, torrent will not be removed, will try again on next run' % name)
except:
return False
@@ -126,7 +127,6 @@ def removeTorrent(torrentid, remove_data=False):
def torrentAction(method, arguments):
host = headphones.CONFIG.TRANSMISSION_HOST
username = headphones.CONFIG.TRANSMISSION_USERNAME
password = headphones.CONFIG.TRANSMISSION_PASSWORD
@@ -152,7 +152,7 @@ def torrentAction(method, arguments):
# Retrieve session id
auth = (username, password) if username and password else None
response = request.request_response(host, auth=auth,
whitelist_status_code=[401, 409])
if response is None:
logger.error("Error gettings Transmission session ID")
@@ -162,7 +162,7 @@ def torrentAction(method, arguments):
if response.status_code == 401:
if auth:
logger.error("Username and/or password not accepted by " \
"Transmission")
"Transmission")
else:
logger.error("Transmission authorization required")
@@ -179,7 +179,7 @@ def torrentAction(method, arguments):
data = {'method': method, 'arguments': arguments}
response = request.request_json(host, method="POST", data=json.dumps(data),
headers=headers, auth=auth)
print response

View File

@@ -17,10 +17,10 @@ from headphones import logger, db, importer
def dbUpdate(forcefull=False):
myDB = db.DBConnection()
active_artists = myDB.select('SELECT ArtistID, ArtistName from artists WHERE Status="Active" or Status="Loading" order by LastUpdated ASC')
active_artists = myDB.select(
'SELECT ArtistID, ArtistName from artists WHERE Status="Active" or Status="Loading" order by LastUpdated ASC')
logger.info('Starting update for %i active artists', len(active_artists))
for artist in active_artists:

View File

@@ -14,26 +14,24 @@
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
import urllib
import json
import time
from collections import namedtuple
import urllib2
import urlparse
import cookielib
import json
import re
import os
import time
import headphones
from headphones import logger
from collections import namedtuple
class utorrentclient(object):
TOKEN_REGEX = "<div id='token' style='display:none;'>([^<>]+)</div>"
UTSetting = namedtuple("UTSetting", ["name", "int", "str", "access"])
def __init__(self, base_url=None, username=None, password=None,):
def __init__(self, base_url=None, username=None, password=None, ):
host = headphones.CONFIG.UTORRENT_HOST
if not host.startswith('http'):
@@ -50,7 +48,7 @@ class utorrentclient(object):
self.password = headphones.CONFIG.UTORRENT_PASSWORD
self.opener = self._make_opener('uTorrent', self.base_url, self.username, self.password)
self.token = self._get_token()
#TODO refresh token, when necessary
# TODO refresh token, when necessary
def _make_opener(self, realm, base_url, username, password):
"""uTorrent API need HTTP Basic Auth and cookie support for token verify."""
@@ -83,7 +81,7 @@ class utorrentclient(object):
return self._action(params)
def add_url(self, url):
#can receive magnet or normal .torrent link
# can receive magnet or normal .torrent link
params = [('action', 'add-url'), ('s', url)]
return self._action(params)
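For context, the TOKEN_REGEX at the top of this class exists because the uTorrent WebUI issues an anti-CSRF token from `/gui/token.html`, and that token (plus HTTP Basic Auth and the cookie it is tied to) must accompany every action. A hedged sketch of the fetch, using requests instead of the module's urllib2 opener:

```python
import re
import requests

TOKEN_REGEX = "<div id='token' style='display:none;'>([^<>]+)</div>"

def get_token(base_url, username, password):
    """base_url like 'http://localhost:8080' (illustrative)."""
    session = requests.session()
    page = session.get(base_url + '/gui/token.html', auth=(username, password))
    token = re.search(TOKEN_REGEX, page.text).group(1)
    return session, token  # the cookie jar in `session` is tied to the token

# Every action then repeats the token, e.g.:
# session.get(base_url + '/gui/',
#             params=[('token', token), ('action', 'add-url'), ('s', url)],
#             auth=(username, password))
```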
@@ -181,13 +179,15 @@ def removeTorrent(hash, remove_data=False):
status, torrentList = uTorrentClient.list()
torrents = torrentList['torrents']
for torrent in torrents:
if torrent[0].lower() == hash:
if torrent[0].upper() == hash.upper():
if torrent[21] == 'Finished':
logger.info('%s has finished seeding, removing torrent and data' % torrent[2])
uTorrentClient.remove(hash, remove_data)
return True
else:
logger.info('%s has not finished seeding yet, torrent will not be removed, will try again on next run' % torrent[2])
logger.info(
'%s has not finished seeding yet, torrent will not be removed, will try again on next run' %
torrent[2])
return False
return False
@@ -203,7 +203,6 @@ def setSeedRatio(hash, ratio):
def dirTorrent(hash, cacheid=None, return_name=None):
uTorrentClient = utorrentclient()
if not cacheid:
@@ -228,19 +227,20 @@ def dirTorrent(hash, cacheid=None, return_name=None):
return None, None
def addTorrent(link):
uTorrentClient = utorrentclient()
uTorrentClient.add_url(link)
def getFolder(hash):
uTorrentClient = utorrentclient()
# Get Active Directory from settings
active_dir, completed_dir = getSettingsDirectories()
if not active_dir:
logger.error('Could not get "Put new downloads in:" directory from uTorrent settings, please ensure it is set')
logger.error(
'Could not get "Put new downloads in:" directory from uTorrent settings, please ensure it is set')
return None
# Get Torrent Folder Name

View File

@@ -13,18 +13,17 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
import re
import os
import tarfile
import platform
import headphones
import subprocess
import re
import os
import headphones
from headphones import logger, version, request
def runGit(args):
if headphones.CONFIG.GIT_PATH:
git_locations = ['"' + headphones.CONFIG.GIT_PATH + '"']
else:
@@ -40,7 +39,8 @@ def runGit(args):
try:
logger.debug('Trying to execute: "' + cmd + '" with shell in ' + headphones.PROG_DIR)
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True, cwd=headphones.PROG_DIR)
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True,
cwd=headphones.PROG_DIR)
output, err = p.communicate()
output = output.strip()
@@ -62,7 +62,6 @@ def runGit(args):
def getVersion():
if version.HEADPHONES_VERSION.startswith('win32build'):
headphones.INSTALL_TYPE = 'win'
@@ -92,7 +91,8 @@ def getVersion():
branch_name = branch_name
if not branch_name and headphones.CONFIG.GIT_BRANCH:
logger.error('Could not retrieve branch name from git. Falling back to %s' % headphones.CONFIG.GIT_BRANCH)
logger.error(
'Could not retrieve branch name from git. Falling back to %s' % headphones.CONFIG.GIT_BRANCH)
branch_name = headphones.CONFIG.GIT_BRANCH
if not branch_name:
logger.error('Could not retrieve branch name from git. Defaulting to master')
@@ -123,11 +123,13 @@ def checkGithub():
# Get the latest version available from github
logger.info('Retrieving latest version information from GitHub')
url = 'https://api.github.com/repos/%s/headphones/commits/%s' % (headphones.CONFIG.GIT_USER, headphones.CONFIG.GIT_BRANCH)
url = 'https://api.github.com/repos/%s/headphones/commits/%s' % (
headphones.CONFIG.GIT_USER, headphones.CONFIG.GIT_BRANCH)
version = request.request_json(url, timeout=20, validator=lambda x: type(x) == dict)
if version is None:
logger.warn('Could not get the latest version from GitHub. Are you running a local development version?')
logger.warn(
'Could not get the latest version from GitHub. Are you running a local development version?')
return headphones.CURRENT_VERSION
headphones.LATEST_VERSION = version['sha']
@@ -135,7 +137,8 @@ def checkGithub():
# See how many commits behind we are
if not headphones.CURRENT_VERSION:
logger.info('You are running an unknown version of Headphones. Run the updater to identify your version')
logger.info(
'You are running an unknown version of Headphones. Run the updater to identify your version')
return headphones.LATEST_VERSION
if headphones.LATEST_VERSION == headphones.CURRENT_VERSION:
@@ -143,8 +146,10 @@ def checkGithub():
return headphones.LATEST_VERSION
logger.info('Comparing currently installed version with latest GitHub version')
url = 'https://api.github.com/repos/%s/headphones/compare/%s...%s' % (headphones.CONFIG.GIT_USER, headphones.LATEST_VERSION, headphones.CURRENT_VERSION)
commits = request.request_json(url, timeout=20, whitelist_status_code=404, validator=lambda x: type(x) == dict)
url = 'https://api.github.com/repos/%s/headphones/compare/%s...%s' % (
headphones.CONFIG.GIT_USER, headphones.LATEST_VERSION, headphones.CURRENT_VERSION)
commits = request.request_json(url, timeout=20, whitelist_status_code=404,
validator=lambda x: type(x) == dict)
if commits is None:
logger.warn('Could not get commits behind from GitHub.')
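For context, GitHub's compare endpoint reports ahead/behind counters in its JSON body, and `behind_by` is presumably the field that feeds `COMMITS_BEHIND` below; a sample of the relevant fields (values made up):

```python
response = {"ahead_by": 0, "behind_by": 3, "total_commits": 3}  # sample payload

commits_behind = int(response['behind_by'])  # how far CURRENT trails LATEST
assert commits_behind == 3
```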
@@ -158,7 +163,8 @@ def checkGithub():
headphones.COMMITS_BEHIND = 0
if headphones.COMMITS_BEHIND > 0:
logger.info('New version is available. You are %s commits behind' % headphones.COMMITS_BEHIND)
logger.info(
'New version is available. You are %s commits behind' % headphones.COMMITS_BEHIND)
elif headphones.COMMITS_BEHIND == 0:
logger.info('Headphones is up to date')
@@ -185,7 +191,8 @@ def update():
logger.info('Output: ' + str(output))
else:
tar_download_url = 'https://github.com/%s/headphones/tarball/%s' % (headphones.CONFIG.GIT_USER, headphones.CONFIG.GIT_BRANCH)
tar_download_url = 'https://github.com/%s/headphones/tarball/%s' % (
headphones.CONFIG.GIT_USER, headphones.CONFIG.GIT_BRANCH)
update_dir = os.path.join(headphones.PROG_DIR, 'update')
version_path = os.path.join(headphones.PROG_DIR, 'version.txt')
@@ -214,7 +221,8 @@ def update():
os.remove(tar_download_path)
# Find update dir name
update_dir_contents = [x for x in os.listdir(update_dir) if os.path.isdir(os.path.join(update_dir, x))]
update_dir_contents = [x for x in os.listdir(update_dir) if
os.path.isdir(os.path.join(update_dir, x))]
if len(update_dir_contents) != 1:
logger.error("Invalid update data, update failed: " + str(update_dir_contents))
return

View File

@@ -15,18 +15,8 @@
# NZBGet support added by CurlyMo <curlymoo1@gmail.com> as a part of XBian - XBMC on the Raspberry Pi
from headphones import logger, searcher, db, importer, mb, lastfm, librarysync, helpers, notifiers
from headphones.helpers import checked, radio, today, cleanName
from mako.lookup import TemplateLookup
from mako import exceptions
from operator import itemgetter
import headphones
import threading
import cherrypy
import urllib2
import hashlib
import random
import urllib
@@ -34,8 +24,16 @@ import json
import time
import cgi
import sys
import urllib2
import os
import re
from headphones import logger, searcher, db, importer, mb, lastfm, librarysync, helpers, notifiers
from headphones.helpers import checked, radio, today, cleanName
from mako.lookup import TemplateLookup
from mako import exceptions
import headphones
import cherrypy
try:
# pylint:disable=E0611
@@ -48,7 +46,6 @@ except ImportError:
def serve_template(templatename, **kwargs):
interface_dir = os.path.join(str(headphones.PROG_DIR), 'data/interfaces/')
template_dir = os.path.join(str(interface_dir), headphones.CONFIG.INTERFACE)
@@ -62,7 +59,6 @@ def serve_template(templatename, **kwargs):
class WebInterface(object):
@cherrypy.expose
def index(self):
raise cherrypy.HTTPRedirect("home")
@@ -90,7 +86,8 @@ class WebInterface(object):
if not artist:
raise cherrypy.HTTPRedirect("home")
albums = myDB.select('SELECT * from albums WHERE ArtistID=? order by ReleaseDate DESC', [ArtistID])
albums = myDB.select('SELECT * from albums WHERE ArtistID=? order by ReleaseDate DESC',
[ArtistID])
# Serve the extras up as a dict to make things easier for new templates (append new extras to the end)
extras_list = headphones.POSSIBLE_EXTRAS
@@ -109,7 +106,8 @@ class WebInterface(object):
extras_dict[extra] = ""
i += 1
return serve_template(templatename="artist.html", title=artist['ArtistName'], artist=artist, albums=albums, extras=extras_dict)
return serve_template(templatename="artist.html", title=artist['ArtistName'], artist=artist,
albums=albums, extras=extras_dict)
@cherrypy.expose
def albumPage(self, AlbumID):
@@ -128,8 +126,10 @@ class WebInterface(object):
if not album:
raise cherrypy.HTTPRedirect("home")
tracks = myDB.select('SELECT * from tracks WHERE AlbumID=? ORDER BY CAST(TrackNumber AS INTEGER)', [AlbumID])
description = myDB.action('SELECT * from descriptions WHERE ReleaseGroupID=?', [AlbumID]).fetchone()
tracks = myDB.select(
'SELECT * from tracks WHERE AlbumID=? ORDER BY CAST(TrackNumber AS INTEGER)', [AlbumID])
description = myDB.action('SELECT * from descriptions WHERE ReleaseGroupID=?',
[AlbumID]).fetchone()
if not album['ArtistName']:
title = ' - '
@@ -139,7 +139,8 @@ class WebInterface(object):
title = title + ""
else:
title = title + album['AlbumTitle']
return serve_template(templatename="album.html", title=title, album=album, tracks=tracks, description=description)
return serve_template(templatename="album.html", title=title, album=album, tracks=tracks,
description=description)
@cherrypy.expose
def search(self, name, type):
@@ -151,7 +152,9 @@ class WebInterface(object):
searchresults = mb.findRelease(name, limit=100)
else:
searchresults = mb.findSeries(name, limit=100)
return serve_template(templatename="searchresults.html", title='Search Results for: "' + cgi.escape(name) + '"', searchresults=searchresults, name=cgi.escape(name), type=type)
return serve_template(templatename="searchresults.html",
title='Search Results for: "' + cgi.escape(name) + '"',
searchresults=searchresults, name=cgi.escape(name), type=type)
@cherrypy.expose
def addArtist(self, artistid):
@@ -162,7 +165,8 @@ class WebInterface(object):
@cherrypy.expose
def addSeries(self, seriesid):
thread = threading.Thread(target=importer.addArtisttoDB, args=[seriesid, False, False, "series"])
thread = threading.Thread(target=importer.addArtisttoDB,
args=[seriesid, False, False, "series"])
thread.start()
thread.join(1)
raise cherrypy.HTTPRedirect("artistPage?ArtistID=%s" % seriesid)
@@ -200,12 +204,18 @@ class WebInterface(object):
controlValueDict = {'ArtistID': ArtistID}
newValueDict = {'IncludeExtras': 0}
myDB.upsert("artists", newValueDict, controlValueDict)
extraalbums = myDB.select('SELECT AlbumID from albums WHERE ArtistID=? AND Status="Skipped" AND Type!="Album"', [ArtistID])
extraalbums = myDB.select(
'SELECT AlbumID from albums WHERE ArtistID=? AND Status="Skipped" AND Type!="Album"',
[ArtistID])
for album in extraalbums:
myDB.action('DELETE from tracks WHERE ArtistID=? AND AlbumID=?', [ArtistID, album['AlbumID']])
myDB.action('DELETE from albums WHERE ArtistID=? AND AlbumID=?', [ArtistID, album['AlbumID']])
myDB.action('DELETE from allalbums WHERE ArtistID=? AND AlbumID=?', [ArtistID, album['AlbumID']])
myDB.action('DELETE from alltracks WHERE ArtistID=? AND AlbumID=?', [ArtistID, album['AlbumID']])
myDB.action('DELETE from tracks WHERE ArtistID=? AND AlbumID=?',
[ArtistID, album['AlbumID']])
myDB.action('DELETE from albums WHERE ArtistID=? AND AlbumID=?',
[ArtistID, album['AlbumID']])
myDB.action('DELETE from allalbums WHERE ArtistID=? AND AlbumID=?',
[ArtistID, album['AlbumID']])
myDB.action('DELETE from alltracks WHERE ArtistID=? AND AlbumID=?',
[ArtistID, album['AlbumID']])
myDB.action('DELETE from releases WHERE ReleaseGroupID=?', [album['AlbumID']])
from headphones import cache
c = cache.Cache()
@@ -242,7 +252,9 @@ class WebInterface(object):
from headphones import cache
c = cache.Cache()
rgids = myDB.select('SELECT AlbumID FROM albums WHERE ArtistID=? UNION SELECT AlbumID FROM allalbums WHERE ArtistID=?', [ArtistID, ArtistID])
rgids = myDB.select(
'SELECT AlbumID FROM albums WHERE ArtistID=? UNION SELECT AlbumID FROM allalbums WHERE ArtistID=?',
[ArtistID, ArtistID])
for rgid in rgids:
albumid = rgid['AlbumID']
myDB.action('DELETE from releases WHERE ReleaseGroupID=?', [albumid])
@@ -269,17 +281,19 @@ class WebInterface(object):
def scanArtist(self, ArtistID):
myDB = db.DBConnection()
artist_name = myDB.select('SELECT DISTINCT ArtistName FROM artists WHERE ArtistID=?', [ArtistID])[0][0]
artist_name = \
myDB.select('SELECT DISTINCT ArtistName FROM artists WHERE ArtistID=?', [ArtistID])[0][0]
logger.info(u"Scanning artist: %s", artist_name)
full_folder_format = headphones.CONFIG.FOLDER_FORMAT
folder_format = re.findall(r'(.*?[Aa]rtist?)\.*', full_folder_format)[0]
acceptable_formats = ["$artist","$sortartist","$first/$artist","$first/$sortartist"]
acceptable_formats = ["$artist", "$sortartist", "$first/$artist", "$first/$sortartist"]
if not folder_format.lower() in acceptable_formats:
logger.info("Can't determine the artist folder from the configured folder_format. Not scanning")
logger.info(
"Can't determine the artist folder from the configured folder_format. Not scanning")
return
# Format the folder to match the settings
@@ -299,12 +313,12 @@ class WebInterface(object):
firstchar = sortname[0]
values = {'$Artist': artist,
'$SortArtist': sortname,
'$First': firstchar.upper(),
'$artist': artist.lower(),
'$sortartist': sortname.lower(),
'$first': firstchar.lower(),
}
folder = helpers.replace_all(folder_format.strip(), values, normalize=True)
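To make the substitution concrete, here is what replace_all amounts to for a simple format (a plain-replace sketch with a made-up artist; the real helper also normalizes unicode):

```python
values = {'$Artist': 'Radiohead', '$SortArtist': 'Radiohead', '$First': 'R',
          '$artist': 'radiohead', '$sortartist': 'radiohead', '$first': 'r'}

folder = '$first/$artist'  # assumed folder_format
for token, replacement in values.items():
    folder = folder.replace(token, replacement)

assert folder == 'r/radiohead'
```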
@@ -332,14 +346,17 @@ class WebInterface(object):
if not os.path.isdir(artistfolder):
logger.debug("Cannot find directory: " + artistfolder)
continue
threading.Thread(target=librarysync.libraryScan, kwargs={"dir":artistfolder, "artistScan":True, "ArtistID":ArtistID, "ArtistName":artist_name}).start()
threading.Thread(target=librarysync.libraryScan,
kwargs={"dir": artistfolder, "artistScan": True, "ArtistID": ArtistID,
"ArtistName": artist_name}).start()
raise cherrypy.HTTPRedirect("artistPage?ArtistID=%s" % ArtistID)
@cherrypy.expose
def deleteEmptyArtists(self):
logger.info(u"Deleting all empty artists")
myDB = db.DBConnection()
emptyArtistIDs = [row['ArtistID'] for row in myDB.select("SELECT ArtistID FROM artists WHERE LatestAlbum IS NULL")]
emptyArtistIDs = [row['ArtistID'] for row in
myDB.select("SELECT ArtistID FROM artists WHERE LatestAlbum IS NULL")]
for ArtistID in emptyArtistIDs:
self.removeArtist(ArtistID)
@@ -371,8 +388,11 @@ class WebInterface(object):
if ArtistID:
ArtistIDT = ArtistID
else:
ArtistIDT = myDB.action('SELECT ArtistID FROM albums WHERE AlbumID=?', [mbid]).fetchone()[0]
myDB.action('UPDATE artists SET TotalTracks=(SELECT COUNT(*) FROM tracks WHERE ArtistID = ? AND AlbumTitle IN (SELECT AlbumTitle FROM albums WHERE Status != "Ignored")) WHERE ArtistID = ?', [ArtistIDT, ArtistIDT])
ArtistIDT = \
myDB.action('SELECT ArtistID FROM albums WHERE AlbumID=?', [mbid]).fetchone()[0]
myDB.action(
'UPDATE artists SET TotalTracks=(SELECT COUNT(*) FROM tracks WHERE ArtistID = ? AND AlbumTitle IN (SELECT AlbumTitle FROM albums WHERE Status != "Ignored")) WHERE ArtistID = ?',
[ArtistIDT, ArtistIDT])
if ArtistID:
raise cherrypy.HTTPRedirect("artistPage?ArtistID=%s" % ArtistID)
else:
@@ -385,8 +405,10 @@ class WebInterface(object):
if action == "ignore":
myDB = db.DBConnection()
for artist in args:
myDB.action('DELETE FROM newartists WHERE ArtistName=?', [artist.decode(headphones.SYS_ENCODING, 'replace')])
myDB.action('UPDATE have SET Matched="Ignored" WHERE ArtistName=?', [artist.decode(headphones.SYS_ENCODING, 'replace')])
myDB.action('DELETE FROM newartists WHERE ArtistName=?',
[artist.decode(headphones.SYS_ENCODING, 'replace')])
myDB.action('UPDATE have SET Matched="Ignored" WHERE ArtistName=?',
[artist.decode(headphones.SYS_ENCODING, 'replace')])
logger.info("Artist %s removed from new artist list and set to ignored" % artist)
raise cherrypy.HTTPRedirect("home")
@@ -440,12 +462,12 @@ class WebInterface(object):
(data, bestqual) = searcher.preprocess(result)
if data and bestqual:
myDB = db.DBConnection()
album = myDB.action('SELECT * from albums WHERE AlbumID=?', [AlbumID]).fetchone()
searcher.send_to_downloader(data, bestqual, album)
return json.dumps({'result':'success'})
myDB = db.DBConnection()
album = myDB.action('SELECT * from albums WHERE AlbumID=?', [AlbumID]).fetchone()
searcher.send_to_downloader(data, bestqual, album)
return json.dumps({'result': 'success'})
else:
return json.dumps({'result':'failure'})
return json.dumps({'result': 'failure'})
@cherrypy.expose
def unqueueAlbum(self, AlbumID, ArtistID):
@@ -462,10 +484,12 @@ class WebInterface(object):
myDB = db.DBConnection()
myDB.action('DELETE from have WHERE Matched=?', [AlbumID])
album = myDB.action('SELECT ArtistID, ArtistName, AlbumTitle from albums where AlbumID=?', [AlbumID]).fetchone()
album = myDB.action('SELECT ArtistID, ArtistName, AlbumTitle from albums where AlbumID=?',
[AlbumID]).fetchone()
if album:
ArtistID = album['ArtistID']
myDB.action('DELETE from have WHERE ArtistName=? AND AlbumTitle=?', [album['ArtistName'], album['AlbumTitle']])
myDB.action('DELETE from have WHERE ArtistName=? AND AlbumTitle=?',
[album['ArtistName'], album['AlbumTitle']])
myDB.action('DELETE from albums WHERE AlbumID=?', [AlbumID])
myDB.action('DELETE from tracks WHERE AlbumID=?', [AlbumID])
@@ -505,9 +529,11 @@ class WebInterface(object):
@cherrypy.expose
def upcoming(self):
myDB = db.DBConnection()
upcoming = myDB.select("SELECT * from albums WHERE ReleaseDate > date('now') order by ReleaseDate ASC")
upcoming = myDB.select(
"SELECT * from albums WHERE ReleaseDate > date('now') order by ReleaseDate ASC")
wanted = myDB.select("SELECT * from albums WHERE Status='Wanted'")
return serve_template(templatename="upcoming.html", title="Upcoming", upcoming=upcoming, wanted=wanted)
return serve_template(templatename="upcoming.html", title="Upcoming", upcoming=upcoming,
wanted=wanted)
@cherrypy.expose
def manage(self):
@@ -519,7 +545,8 @@ class WebInterface(object):
def manageArtists(self):
myDB = db.DBConnection()
artists = myDB.select('SELECT * from artists order by ArtistSortName COLLATE NOCASE')
return serve_template(templatename="manageartists.html", title="Manage Artists", artists=artists)
return serve_template(templatename="manageartists.html", title="Manage Artists",
artists=artists)
@cherrypy.expose
def manageAlbums(self, Status=None):
@@ -530,87 +557,115 @@ class WebInterface(object):
albums = myDB.select('SELECT * from albums WHERE Status=?', [Status])
else:
albums = myDB.select('SELECT * from albums')
return serve_template(templatename="managealbums.html", title="Manage Albums", albums=albums)
return serve_template(templatename="managealbums.html", title="Manage Albums",
albums=albums)
@cherrypy.expose
def manageNew(self):
myDB = db.DBConnection()
newartists = myDB.select('SELECT * from newartists')
return serve_template(templatename="managenew.html", title="Manage New Artists", newartists=newartists)
return serve_template(templatename="managenew.html", title="Manage New Artists",
newartists=newartists)
@cherrypy.expose
def manageUnmatched(self):
myDB = db.DBConnection()
have_album_dictionary = []
headphones_album_dictionary = []
have_albums = myDB.select('SELECT ArtistName, AlbumTitle, TrackTitle, CleanName from have WHERE Matched = "Failed" GROUP BY AlbumTitle ORDER BY ArtistName')
have_albums = myDB.select(
'SELECT ArtistName, AlbumTitle, TrackTitle, CleanName from have WHERE Matched = "Failed" GROUP BY AlbumTitle ORDER BY ArtistName')
for albums in have_albums:
#Have to skip over manually matched tracks
# Have to skip over manually matched tracks
if albums['ArtistName'] and albums['AlbumTitle'] and albums['TrackTitle']:
original_clean = helpers.cleanName(albums['ArtistName'] + " " + albums['AlbumTitle'] + " " + albums['TrackTitle'])
# else:
# original_clean = None
original_clean = helpers.cleanName(
albums['ArtistName'] + " " + albums['AlbumTitle'] + " " + albums['TrackTitle'])
if original_clean == albums['CleanName']:
have_dict = {'ArtistName': albums['ArtistName'], 'AlbumTitle': albums['AlbumTitle']}
have_dict = {'ArtistName': albums['ArtistName'],
'AlbumTitle': albums['AlbumTitle']}
have_album_dictionary.append(have_dict)
headphones_albums = myDB.select('SELECT ArtistName, AlbumTitle from albums ORDER BY ArtistName')
headphones_albums = myDB.select(
'SELECT ArtistName, AlbumTitle from albums ORDER BY ArtistName')
for albums in headphones_albums:
if albums['ArtistName'] and albums['AlbumTitle']:
headphones_dict = {'ArtistName': albums['ArtistName'], 'AlbumTitle': albums['AlbumTitle']}
headphones_dict = {'ArtistName': albums['ArtistName'],
'AlbumTitle': albums['AlbumTitle']}
headphones_album_dictionary.append(headphones_dict)
#unmatchedalbums = [f for f in have_album_dictionary if f not in [x for x in headphones_album_dictionary]]
# unmatchedalbums = [f for f in have_album_dictionary if f not in [x for x in headphones_album_dictionary]]
check = set([(cleanName(d['ArtistName']).lower(), cleanName(d['AlbumTitle']).lower()) for d in headphones_album_dictionary])
unmatchedalbums = [d for d in have_album_dictionary if (cleanName(d['ArtistName']).lower(), cleanName(d['AlbumTitle']).lower()) not in check]
check = set(
[(cleanName(d['ArtistName']).lower(), cleanName(d['AlbumTitle']).lower()) for d in
headphones_album_dictionary])
unmatchedalbums = [d for d in have_album_dictionary if (
cleanName(d['ArtistName']).lower(), cleanName(d['AlbumTitle']).lower()) not in check]
return serve_template(templatename="manageunmatched.html", title="Manage Unmatched Items", unmatchedalbums=unmatchedalbums)
return serve_template(templatename="manageunmatched.html", title="Manage Unmatched Items",
unmatchedalbums=unmatchedalbums)
@cherrypy.expose
def markUnmatched(self, action=None, existing_artist=None, existing_album=None, new_artist=None, new_album=None):
def markUnmatched(self, action=None, existing_artist=None, existing_album=None, new_artist=None,
new_album=None):
myDB = db.DBConnection()
if action == "ignoreArtist":
artist = existing_artist
myDB.action('UPDATE have SET Matched="Ignored" WHERE ArtistName=? AND Matched = "Failed"', [artist])
myDB.action(
'UPDATE have SET Matched="Ignored" WHERE ArtistName=? AND Matched = "Failed"',
[artist])
elif action == "ignoreAlbum":
artist = existing_artist
album = existing_album
myDB.action('UPDATE have SET Matched="Ignored" WHERE ArtistName=? AND AlbumTitle=? AND Matched = "Failed"', (artist, album))
myDB.action(
'UPDATE have SET Matched="Ignored" WHERE ArtistName=? AND AlbumTitle=? AND Matched = "Failed"',
(artist, album))
elif action == "matchArtist":
existing_artist_clean = helpers.cleanName(existing_artist).lower()
new_artist_clean = helpers.cleanName(new_artist).lower()
if new_artist_clean != existing_artist_clean:
have_tracks = myDB.action('SELECT Matched, CleanName, Location, BitRate, Format FROM have WHERE ArtistName=?', [existing_artist])
have_tracks = myDB.action(
'SELECT Matched, CleanName, Location, BitRate, Format FROM have WHERE ArtistName=?',
[existing_artist])
update_count = 0
for entry in have_tracks:
old_clean_filename = entry['CleanName']
if old_clean_filename.startswith(existing_artist_clean):
new_clean_filename = old_clean_filename.replace(existing_artist_clean, new_artist_clean, 1)
myDB.action('UPDATE have SET CleanName=? WHERE ArtistName=? AND CleanName=?', [new_clean_filename, existing_artist, old_clean_filename])
new_clean_filename = old_clean_filename.replace(existing_artist_clean,
new_artist_clean, 1)
myDB.action(
'UPDATE have SET CleanName=? WHERE ArtistName=? AND CleanName=?',
[new_clean_filename, existing_artist, old_clean_filename])
controlValueDict = {"CleanName": new_clean_filename}
newValueDict = {"Location": entry['Location'],
"BitRate": entry['BitRate'],
"Format": entry['Format']
}
#Attempt to match tracks with new CleanName
match_alltracks = myDB.action('SELECT CleanName from alltracks WHERE CleanName=?', [new_clean_filename]).fetchone()
# Attempt to match tracks with new CleanName
match_alltracks = myDB.action(
'SELECT CleanName from alltracks WHERE CleanName=?',
[new_clean_filename]).fetchone()
if match_alltracks:
myDB.upsert("alltracks", newValueDict, controlValueDict)
match_tracks = myDB.action('SELECT CleanName, AlbumID from tracks WHERE CleanName=?', [new_clean_filename]).fetchone()
match_tracks = myDB.action(
'SELECT CleanName, AlbumID from tracks WHERE CleanName=?',
[new_clean_filename]).fetchone()
if match_tracks:
myDB.upsert("tracks", newValueDict, controlValueDict)
myDB.action('UPDATE have SET Matched="Manual" WHERE CleanName=?', [new_clean_filename])
myDB.action('UPDATE have SET Matched="Manual" WHERE CleanName=?',
[new_clean_filename])
update_count += 1
#This was throwing errors and I don't know why, but it seems to be working fine.
#else:
#logger.info("There was an error modifying Artist %s. This should not have happened" % existing_artist)
logger.info("Manual matching yielded %s new matches for Artist: %s" % (update_count, new_artist))
# This was throwing errors and I don't know why, but it seems to be working fine.
# else:
# logger.info("There was an error modifying Artist %s. This should not have happened" % existing_artist)
logger.info("Manual matching yielded %s new matches for Artist: %s" % (
update_count, new_artist))
if update_count > 0:
librarysync.update_album_status()
else:
logger.info("Artist %s already named appropriately; nothing to modify" % existing_artist)
logger.info(
"Artist %s already named appropriately; nothing to modify" % existing_artist)
elif action == "matchAlbum":
existing_artist_clean = helpers.cleanName(existing_artist).lower()
@@ -620,83 +675,115 @@ class WebInterface(object):
existing_clean_string = existing_artist_clean + " " + existing_album_clean
new_clean_string = new_artist_clean + " " + new_album_clean
if existing_clean_string != new_clean_string:
have_tracks = myDB.action('SELECT Matched, CleanName, Location, BitRate, Format FROM have WHERE ArtistName=? AND AlbumTitle=?', (existing_artist, existing_album))
have_tracks = myDB.action(
'SELECT Matched, CleanName, Location, BitRate, Format FROM have WHERE ArtistName=? AND AlbumTitle=?',
(existing_artist, existing_album))
update_count = 0
for entry in have_tracks:
old_clean_filename = entry['CleanName']
if old_clean_filename.startswith(existing_clean_string):
new_clean_filename = old_clean_filename.replace(existing_clean_string, new_clean_string, 1)
myDB.action('UPDATE have SET CleanName=? WHERE ArtistName=? AND AlbumTitle=? AND CleanName=?', [new_clean_filename, existing_artist, existing_album, old_clean_filename])
new_clean_filename = old_clean_filename.replace(existing_clean_string,
new_clean_string, 1)
myDB.action(
'UPDATE have SET CleanName=? WHERE ArtistName=? AND AlbumTitle=? AND CleanName=?',
[new_clean_filename, existing_artist, existing_album,
old_clean_filename])
controlValueDict = {"CleanName": new_clean_filename}
newValueDict = {"Location": entry['Location'],
"BitRate": entry['BitRate'],
"Format": entry['Format']
}
#Attempt to match tracks with new CleanName
match_alltracks = myDB.action('SELECT CleanName from alltracks WHERE CleanName=?', [new_clean_filename]).fetchone()
# Attempt to match tracks with new CleanName
match_alltracks = myDB.action(
'SELECT CleanName from alltracks WHERE CleanName=?',
[new_clean_filename]).fetchone()
if match_alltracks:
myDB.upsert("alltracks", newValueDict, controlValueDict)
match_tracks = myDB.action('SELECT CleanName, AlbumID from tracks WHERE CleanName=?', [new_clean_filename]).fetchone()
match_tracks = myDB.action(
'SELECT CleanName, AlbumID from tracks WHERE CleanName=?',
[new_clean_filename]).fetchone()
if match_tracks:
myDB.upsert("tracks", newValueDict, controlValueDict)
myDB.action('UPDATE have SET Matched="Manual" WHERE CleanName=?', [new_clean_filename])
myDB.action('UPDATE have SET Matched="Manual" WHERE CleanName=?',
[new_clean_filename])
album_id = match_tracks['AlbumID']
update_count += 1
#This was throwing errors and I don't know why, but it seems to be working fine.
#else:
#logger.info("There was an error modifying Artist %s / Album %s with clean name %s" % (existing_artist, existing_album, existing_clean_string))
logger.info("Manual matching yielded %s new matches for Artist: %s / Album: %s" % (update_count, new_artist, new_album))
# This was throwing errors and I don't know why, but it seems to be working fine.
# else:
# logger.info("There was an error modifying Artist %s / Album %s with clean name %s" % (existing_artist, existing_album, existing_clean_string))
logger.info("Manual matching yielded %s new matches for Artist: %s / Album: %s" % (
update_count, new_artist, new_album))
if update_count > 0:
librarysync.update_album_status(album_id)
else:
logger.info("Artist %s / Album %s already named appropriately; nothing to modify" % (existing_artist, existing_album))
logger.info(
"Artist %s / Album %s already named appropriately; nothing to modify" % (
existing_artist, existing_album))
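Both match branches rely on CleanName being the cleaned '&lt;artist&gt; &lt;album&gt; &lt;track&gt;' string, so a rename boils down to a single prefix replace with count 1; a made-up walk-through:

```python
existing_artist_clean = 'beatles'  # cleanName(existing_artist).lower()
new_artist_clean = 'the beatles'   # cleanName(new_artist).lower()

old_clean_filename = 'beatles abbey road come together'
if old_clean_filename.startswith(existing_artist_clean):
    new_clean_filename = old_clean_filename.replace(
        existing_artist_clean, new_artist_clean, 1)  # count=1: touch only the prefix

assert new_clean_filename == 'the beatles abbey road come together'
```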
@cherrypy.expose
def manageManual(self):
myDB = db.DBConnection()
manual_albums = []
manualalbums = myDB.select('SELECT ArtistName, AlbumTitle, TrackTitle, CleanName, Matched from have')
manualalbums = myDB.select(
'SELECT ArtistName, AlbumTitle, TrackTitle, CleanName, Matched from have')
for albums in manualalbums:
if albums['ArtistName'] and albums['AlbumTitle'] and albums['TrackTitle']:
original_clean = helpers.cleanName(albums['ArtistName'] + " " + albums['AlbumTitle'] + " " + albums['TrackTitle'])
if albums['Matched'] == "Ignored" or albums['Matched'] == "Manual" or albums['CleanName'] != original_clean:
original_clean = helpers.cleanName(
albums['ArtistName'] + " " + albums['AlbumTitle'] + " " + albums['TrackTitle'])
if albums['Matched'] == "Ignored" or albums['Matched'] == "Manual" or albums[
'CleanName'] != original_clean:
if albums['Matched'] == "Ignored":
album_status = "Ignored"
elif albums['Matched'] == "Manual" or albums['CleanName'] != original_clean:
album_status = "Matched"
manual_dict = {'ArtistName': albums['ArtistName'], 'AlbumTitle': albums['AlbumTitle'], 'AlbumStatus': album_status}
manual_dict = {'ArtistName': albums['ArtistName'],
'AlbumTitle': albums['AlbumTitle'], 'AlbumStatus': album_status}
if manual_dict not in manual_albums:
manual_albums.append(manual_dict)
manual_albums_sorted = sorted(manual_albums, key=itemgetter('ArtistName', 'AlbumTitle'))
return serve_template(templatename="managemanual.html", title="Manage Manual Items", manualalbums=manual_albums_sorted)
return serve_template(templatename="managemanual.html", title="Manage Manual Items",
manualalbums=manual_albums_sorted)
@cherrypy.expose
def markManual(self, action=None, existing_artist=None, existing_album=None):
myDB = db.DBConnection()
if action == "unignoreArtist":
artist = existing_artist
myDB.action('UPDATE have SET Matched="Failed" WHERE ArtistName=? AND Matched="Ignored"', [artist])
myDB.action('UPDATE have SET Matched="Failed" WHERE ArtistName=? AND Matched="Ignored"',
[artist])
logger.info("Artist: %s successfully restored to unmatched list" % artist)
elif action == "unignoreAlbum":
artist = existing_artist
album = existing_album
myDB.action('UPDATE have SET Matched="Failed" WHERE ArtistName=? AND AlbumTitle=? AND Matched="Ignored"', (artist, album))
myDB.action(
'UPDATE have SET Matched="Failed" WHERE ArtistName=? AND AlbumTitle=? AND Matched="Ignored"',
(artist, album))
logger.info("Album: %s successfully restored to unmatched list" % album)
elif action == "unmatchArtist":
artist = existing_artist
update_clean = myDB.select('SELECT ArtistName, AlbumTitle, TrackTitle, CleanName, Matched from have WHERE ArtistName=?', [artist])
update_clean = myDB.select(
'SELECT ArtistName, AlbumTitle, TrackTitle, CleanName, Matched from have WHERE ArtistName=?',
[artist])
update_count = 0
for tracks in update_clean:
original_clean = helpers.cleanName(tracks['ArtistName'] + " " + tracks['AlbumTitle'] + " " + tracks['TrackTitle']).lower()
original_clean = helpers.cleanName(
tracks['ArtistName'] + " " + tracks['AlbumTitle'] + " " + tracks[
'TrackTitle']).lower()
album = tracks['AlbumTitle']
track_title = tracks['TrackTitle']
if tracks['CleanName'] != original_clean:
myDB.action('UPDATE tracks SET Location=?, BitRate=?, Format=? WHERE CleanName=?', [None, None, None, tracks['CleanName']])
myDB.action('UPDATE alltracks SET Location=?, BitRate=?, Format=? WHERE CleanName=?', [None, None, None, tracks['CleanName']])
myDB.action('UPDATE have SET CleanName=?, Matched="Failed" WHERE ArtistName=? AND AlbumTitle=? AND TrackTitle=?', (original_clean, artist, album, track_title))
myDB.action(
'UPDATE tracks SET Location=?, BitRate=?, Format=? WHERE CleanName=?',
[None, None, None, tracks['CleanName']])
myDB.action(
'UPDATE alltracks SET Location=?, BitRate=?, Format=? WHERE CleanName=?',
[None, None, None, tracks['CleanName']])
myDB.action(
'UPDATE have SET CleanName=?, Matched="Failed" WHERE ArtistName=? AND AlbumTitle=? AND TrackTitle=?',
(original_clean, artist, album, track_title))
update_count += 1
if update_count > 0:
librarysync.update_album_status()
@@ -705,18 +792,29 @@ class WebInterface(object):
elif action == "unmatchAlbum":
artist = existing_artist
album = existing_album
update_clean = myDB.select('SELECT ArtistName, AlbumTitle, TrackTitle, CleanName, Matched from have WHERE ArtistName=? AND AlbumTitle=?', (artist, album))
update_clean = myDB.select(
'SELECT ArtistName, AlbumTitle, TrackTitle, CleanName, Matched from have WHERE ArtistName=? AND AlbumTitle=?',
(artist, album))
update_count = 0
for tracks in update_clean:
original_clean = helpers.cleanName(tracks['ArtistName'] + " " + tracks['AlbumTitle'] + " " + tracks['TrackTitle']).lower()
original_clean = helpers.cleanName(
tracks['ArtistName'] + " " + tracks['AlbumTitle'] + " " + tracks[
'TrackTitle']).lower()
track_title = tracks['TrackTitle']
if tracks['CleanName'] != original_clean:
album_id_check = myDB.action('SELECT AlbumID from tracks WHERE CleanName=?', [tracks['CleanName']]).fetchone()
album_id_check = myDB.action('SELECT AlbumID from tracks WHERE CleanName=?',
[tracks['CleanName']]).fetchone()
if album_id_check:
album_id = album_id_check[0]
myDB.action('UPDATE tracks SET Location=?, BitRate=?, Format=? WHERE CleanName=?', [None, None, None, tracks['CleanName']])
myDB.action('UPDATE alltracks SET Location=?, BitRate=?, Format=? WHERE CleanName=?', [None, None, None, tracks['CleanName']])
myDB.action('UPDATE have SET CleanName=?, Matched="Failed" WHERE ArtistName=? AND AlbumTitle=? AND TrackTitle=?', (original_clean, artist, album, track_title))
myDB.action(
'UPDATE tracks SET Location=?, BitRate=?, Format=? WHERE CleanName=?',
[None, None, None, tracks['CleanName']])
myDB.action(
'UPDATE alltracks SET Location=?, BitRate=?, Format=? WHERE CleanName=?',
[None, None, None, tracks['CleanName']])
myDB.action(
'UPDATE have SET CleanName=?, Matched="Failed" WHERE ArtistName=? AND AlbumTitle=? AND TrackTitle=?',
(original_clean, artist, album, track_title))
update_count += 1
if update_count > 0:
librarysync.update_album_status(album_id)
@@ -802,7 +900,9 @@ class WebInterface(object):
@cherrypy.expose
def forcePostProcess(self, dir=None, album_dir=None, keep_original_folder=False):
from headphones import postprocessor
threading.Thread(target=postprocessor.forcePostProcess, kwargs={'dir': dir, 'album_dir': album_dir, 'keep_original_folder':keep_original_folder == 'True'}).start()
threading.Thread(target=postprocessor.forcePostProcess,
kwargs={'dir': dir, 'album_dir': album_dir,
'keep_original_folder': keep_original_folder == 'True'}).start()
raise cherrypy.HTTPRedirect("home")
@cherrypy.expose
@@ -814,7 +914,8 @@ class WebInterface(object):
@cherrypy.expose
def history(self):
myDB = db.DBConnection()
history = myDB.select('''SELECT * from snatched WHERE Status NOT LIKE "Seed%" order by DateAdded DESC''')
history = myDB.select(
'''SELECT AlbumID, Title, Size, URL, DateAdded, Status, Kind, ifnull(FolderName, '?') FolderName FROM snatched WHERE Status NOT LIKE "Seed%" ORDER BY DateAdded DESC''')
return serve_template(templatename="history.html", title="History", history=history)
@cherrypy.expose
@@ -831,13 +932,14 @@ class WebInterface(object):
def toggleVerbose(self):
headphones.VERBOSE = not headphones.VERBOSE
logger.initLogger(console=not headphones.QUIET,
log_dir=headphones.CONFIG.LOG_DIR, verbose=headphones.VERBOSE)
logger.info("Verbose toggled, set to %s", headphones.VERBOSE)
logger.debug("If you read this message, debug logging is available")
raise cherrypy.HTTPRedirect("logs")
@cherrypy.expose
def getLog(self, iDisplayStart=0, iDisplayLength=100, iSortCol_0=0, sSortDir_0="desc", sSearch="", **kwargs):
def getLog(self, iDisplayStart=0, iDisplayLength=100, iSortCol_0=0, sSortDir_0="desc",
sSearch="", **kwargs):
iDisplayStart = int(iDisplayStart)
iDisplayLength = int(iDisplayLength)
@@ -845,7 +947,8 @@ class WebInterface(object):
if sSearch == "":
filtered = headphones.LOG_LIST[::]
else:
filtered = [row for row in headphones.LOG_LIST for column in row if sSearch.lower() in column.lower()]
filtered = [row for row in headphones.LOG_LIST for column in row if
sSearch.lower() in column.lower()]
sortcolumn = 0
if iSortCol_0 == '1':
@@ -864,7 +967,8 @@ class WebInterface(object):
})
@cherrypy.expose
def getArtists_json(self, iDisplayStart=0, iDisplayLength=100, sSearch="", iSortCol_0='0', sSortDir_0='asc', **kwargs):
def getArtists_json(self, iDisplayStart=0, iDisplayLength=100, sSearch="", iSortCol_0='0',
sSortDir_0='asc', **kwargs):
iDisplayStart = int(iDisplayStart)
iDisplayLength = int(iDisplayLength)
filtered = []
@@ -885,15 +989,18 @@ class WebInterface(object):
filtered = myDB.select(query)
totalcount = len(filtered)
else:
query = 'SELECT * from artists WHERE ArtistSortName LIKE "%' + sSearch + '%" OR LatestAlbum LIKE "%' + sSearch + '%"' + 'ORDER BY %s COLLATE NOCASE %s' % (sortcolumn, sSortDir_0)
query = 'SELECT * from artists WHERE ArtistSortName LIKE "%' + sSearch + '%" OR LatestAlbum LIKE "%' + sSearch + '%"' + 'ORDER BY %s COLLATE NOCASE %s' % (
sortcolumn, sSortDir_0)
filtered = myDB.select(query)
totalcount = myDB.select('SELECT COUNT(*) from artists')[0][0]
if sortbyhavepercent:
filtered.sort(key=lambda x: (float(x['HaveTracks']) / x['TotalTracks'] if x['TotalTracks'] > 0 else 0.0, x['HaveTracks'] if x['HaveTracks'] else 0.0), reverse=sSortDir_0 == "asc")
filtered.sort(key=lambda x: (
float(x['HaveTracks']) / x['TotalTracks'] if x['TotalTracks'] > 0 else 0.0,
x['HaveTracks'] if x['HaveTracks'] else 0.0), reverse=sSortDir_0 == "asc")
#can't figure out how to change the datatables default sorting order when its using an ajax datasource so ill
#just reverse it here and the first click on the "Latest Album" header will sort by descending release date
# can't figure out how to change the datatables default sorting order when its using an ajax datasource so ill
# just reverse it here and the first click on the "Latest Album" header will sort by descending release date
if sortcolumn == 'ReleaseDate':
filtered.reverse()
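The sort key above is a tuple, so Python orders artists primarily by completion percentage and breaks ties by raw track count. A standalone illustration with made-up rows:

rows = [
    {'HaveTracks': 5, 'TotalTracks': 10},   # 50% complete
    {'HaveTracks': 3, 'TotalTracks': 3},    # 100% complete
    {'HaveTracks': 0, 'TotalTracks': 0},    # nothing known yet
]
rows.sort(key=lambda x: (
    float(x['HaveTracks']) / x['TotalTracks'] if x['TotalTracks'] > 0 else 0.0,
    x['HaveTracks'] or 0.0), reverse=True)
# -> the 100% artist first, then 50%, then the empty artist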
@@ -901,16 +1008,16 @@ class WebInterface(object):
rows = []
for artist in artists:
row = {"ArtistID": artist['ArtistID'],
"ArtistName": artist["ArtistName"],
"ArtistSortName": artist["ArtistSortName"],
"Status": artist["Status"],
"TotalTracks": artist["TotalTracks"],
"HaveTracks": artist["HaveTracks"],
"LatestAlbum": "",
"ReleaseDate": "",
"ReleaseInFuture": "False",
"AlbumID": "",
}
"ArtistName": artist["ArtistName"],
"ArtistSortName": artist["ArtistSortName"],
"Status": artist["Status"],
"TotalTracks": artist["TotalTracks"],
"HaveTracks": artist["HaveTracks"],
"LatestAlbum": "",
"ReleaseDate": "",
"ReleaseInFuture": "False",
"AlbumID": "",
}
if not row['HaveTracks']:
row['HaveTracks'] = 0
@@ -954,9 +1061,9 @@ class WebInterface(object):
myDB = db.DBConnection()
artist = myDB.action('SELECT * FROM artists WHERE ArtistID=?', [ArtistID]).fetchone()
artist_json = json.dumps({
'ArtistName': artist['ArtistName'],
'Status': artist['Status']
})
'ArtistName': artist['ArtistName'],
'Status': artist['Status']
})
return artist_json
@cherrypy.expose
@@ -964,9 +1071,9 @@ class WebInterface(object):
myDB = db.DBConnection()
album = myDB.action('SELECT * from albums WHERE AlbumID=?', [AlbumID]).fetchone()
album_json = json.dumps({
'AlbumTitle': album['AlbumTitle'],
'ArtistName': album['ArtistName'],
'Status': album['Status']
'AlbumTitle': album['AlbumTitle'],
'ArtistName': album['ArtistName'],
'Status': album['Status']
})
return album_json
@@ -982,7 +1089,9 @@ class WebInterface(object):
myDB.action('DELETE from snatched WHERE Status=?', [type])
else:
logger.info(u"Deleting '%s' from history" % title)
myDB.action('DELETE from snatched WHERE Status NOT LIKE "Seed%" AND Title=? AND DateAdded=?', [title, date_added])
myDB.action(
'DELETE from snatched WHERE Status NOT LIKE "Seed%" AND Title=? AND DateAdded=?',
[title, date_added])
raise cherrypy.HTTPRedirect("history")
@cherrypy.expose
@@ -995,7 +1104,7 @@ class WebInterface(object):
def forceScan(self, keepmatched=None):
myDB = db.DBConnection()
#########################################
#NEED TO MOVE THIS INTO A SEPARATE FUNCTION BEFORE RELEASE
# NEED TO MOVE THIS INTO A SEPARATE FUNCTION BEFORE RELEASE
myDB.select('DELETE from Have')
logger.info('Removed all entries in local library database')
myDB.select('UPDATE alltracks SET Location=NULL, BitRate=NULL, Format=NULL')
@@ -1003,7 +1112,8 @@ class WebInterface(object):
logger.info('All tracks in library unmatched')
myDB.action('UPDATE artists SET HaveTracks=NULL')
logger.info('Reset track counts for all artists')
myDB.action('UPDATE albums SET Status="Skipped" WHERE Status="Skipped" OR Status="Downloaded"')
myDB.action(
'UPDATE albums SET Status="Skipped" WHERE Status="Skipped" OR Status="Downloaded"')
logger.info('Marking all unwanted albums as Skipped')
try:
threading.Thread(target=librarysync.libraryScan).start()
@@ -1014,7 +1124,8 @@ class WebInterface(object):
@cherrypy.expose
def config(self):
interface_dir = os.path.join(headphones.PROG_DIR, 'data/interfaces/')
interface_list = [name for name in os.listdir(interface_dir) if os.path.isdir(os.path.join(interface_dir, name))]
interface_list = [name for name in os.listdir(interface_dir) if
os.path.isdir(os.path.join(interface_dir, name))]
config = {
"http_host": headphones.CONFIG.HTTP_HOST,
@@ -1115,7 +1226,8 @@ class WebInterface(object):
"preferred_bitrate": headphones.CONFIG.PREFERRED_BITRATE,
"preferred_bitrate_high": headphones.CONFIG.PREFERRED_BITRATE_HIGH_BUFFER,
"preferred_bitrate_low": headphones.CONFIG.PREFERRED_BITRATE_LOW_BUFFER,
"preferred_bitrate_allow_lossless": checked(headphones.CONFIG.PREFERRED_BITRATE_ALLOW_LOSSLESS),
"preferred_bitrate_allow_lossless": checked(
headphones.CONFIG.PREFERRED_BITRATE_ALLOW_LOSSLESS),
"detect_bitrate": checked(headphones.CONFIG.DETECT_BITRATE),
"lossless_bitrate_from": headphones.CONFIG.LOSSLESS_BITRATE_FROM,
"lossless_bitrate_to": headphones.CONFIG.LOSSLESS_BITRATE_TO,
@@ -1133,7 +1245,7 @@ class WebInterface(object):
"embed_album_art": checked(headphones.CONFIG.EMBED_ALBUM_ART),
"embed_lyrics": checked(headphones.CONFIG.EMBED_LYRICS),
"replace_existing_folders": checked(headphones.CONFIG.REPLACE_EXISTING_FOLDERS),
"keep_original_folder" : checked(headphones.CONFIG.KEEP_ORIGINAL_FOLDER),
"keep_original_folder": checked(headphones.CONFIG.KEEP_ORIGINAL_FOLDER),
"destination_dir": headphones.CONFIG.DESTINATION_DIR,
"lossless_destination_dir": headphones.CONFIG.LOSSLESS_DESTINATION_DIR,
"folder_format": headphones.CONFIG.FOLDER_FORMAT,
@@ -1153,6 +1265,7 @@ class WebInterface(object):
"magnet_links_0": radio(headphones.CONFIG.MAGNET_LINKS, 0),
"magnet_links_1": radio(headphones.CONFIG.MAGNET_LINKS, 1),
"magnet_links_2": radio(headphones.CONFIG.MAGNET_LINKS, 2),
"magnet_links_3": radio(headphones.CONFIG.MAGNET_LINKS, 3),
"log_dir": headphones.CONFIG.LOG_DIR,
"cache_dir": headphones.CONFIG.CACHE_DIR,
"interface_list": interface_list,
@@ -1286,18 +1399,31 @@ class WebInterface(object):
# Handle the variable config options. Note - keys with False values aren't getting passed
checked_configs = [
"launch_browser", "enable_https", "api_enabled", "use_blackhole", "headphones_indexer", "use_newznab", "newznab_enabled", "use_torznab", "torznab_enabled",
"use_nzbsorg", "use_omgwtfnzbs", "use_kat", "use_piratebay", "use_oldpiratebay", "use_mininova", "use_waffles", "use_rutracker",
"use_whatcd", "use_strike", "preferred_bitrate_allow_lossless", "detect_bitrate", "ignore_clean_releases", "freeze_db", "cue_split", "move_files",
"rename_files", "correct_metadata", "cleanup_files", "keep_nfo", "add_album_art", "embed_album_art", "embed_lyrics",
"replace_existing_folders", "keep_original_folder", "file_underscores", "include_extras", "official_releases_only",
"wait_until_release_date", "autowant_upcoming", "autowant_all", "autowant_manually_added", "do_not_process_unmatched", "keep_torrent_files", "music_encoder",
"encoderlossless", "encoder_multicore", "delete_lossless_files", "growl_enabled", "growl_onsnatch", "prowl_enabled",
"prowl_onsnatch", "xbmc_enabled", "xbmc_update", "xbmc_notify", "lms_enabled", "plex_enabled", "plex_update", "plex_notify",
"nma_enabled", "nma_onsnatch", "pushalot_enabled", "pushalot_onsnatch", "synoindex_enabled", "pushover_enabled",
"pushover_onsnatch", "pushbullet_enabled", "pushbullet_onsnatch", "subsonic_enabled", "twitter_enabled", "twitter_onsnatch",
"osx_notify_enabled", "osx_notify_onsnatch", "boxcar_enabled", "boxcar_onsnatch", "songkick_enabled", "songkick_filter_enabled",
"mpc_enabled", "email_enabled", "email_ssl", "email_tls", "email_onsnatch", "customauth", "idtag"
"launch_browser", "enable_https", "api_enabled", "use_blackhole", "headphones_indexer",
"use_newznab", "newznab_enabled", "use_torznab", "torznab_enabled",
"use_nzbsorg", "use_omgwtfnzbs", "use_kat", "use_piratebay", "use_oldpiratebay",
"use_mininova", "use_waffles", "use_rutracker",
"use_whatcd", "use_strike", "preferred_bitrate_allow_lossless", "detect_bitrate",
"ignore_clean_releases", "freeze_db", "cue_split", "move_files",
"rename_files", "correct_metadata", "cleanup_files", "keep_nfo", "add_album_art",
"embed_album_art", "embed_lyrics",
"replace_existing_folders", "keep_original_folder", "file_underscores",
"include_extras", "official_releases_only",
"wait_until_release_date", "autowant_upcoming", "autowant_all",
"autowant_manually_added", "do_not_process_unmatched", "keep_torrent_files",
"music_encoder",
"encoderlossless", "encoder_multicore", "delete_lossless_files", "growl_enabled",
"growl_onsnatch", "prowl_enabled",
"prowl_onsnatch", "xbmc_enabled", "xbmc_update", "xbmc_notify", "lms_enabled",
"plex_enabled", "plex_update", "plex_notify",
"nma_enabled", "nma_onsnatch", "pushalot_enabled", "pushalot_onsnatch",
"synoindex_enabled", "pushover_enabled",
"pushover_onsnatch", "pushbullet_enabled", "pushbullet_onsnatch", "subsonic_enabled",
"twitter_enabled", "twitter_onsnatch",
"osx_notify_enabled", "osx_notify_onsnatch", "boxcar_enabled", "boxcar_onsnatch",
"songkick_enabled", "songkick_filter_enabled",
"mpc_enabled", "email_enabled", "email_ssl", "email_tls", "email_onsnatch",
"customauth", "idtag"
]
for checked_config in checked_configs:
if checked_config not in kwargs:
@@ -1309,6 +1435,12 @@ class WebInterface(object):
kwargs[plain_config] = kwargs[use_config]
del kwargs[use_config]
# Check if encoderoutputformat is set multiple times
if len(kwargs['encoderoutputformat'][-1]) > 1:
kwargs['encoderoutputformat'] = kwargs['encoderoutputformat'][-1]
else:
kwargs['encoderoutputformat'] = kwargs['encoderoutputformat'][0]
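When the settings form posts the same field name more than once, CherryPy hands the handler a list of strings rather than a single string; the block above collapses that back to one value. Roughly, the shapes involved (a more explicit, equivalent sketch):

# Submitted once:   kwargs['encoderoutputformat'] == 'mp3'
# Submitted twice:  kwargs['encoderoutputformat'] == ['mp3', 'ogg']
value = kwargs['encoderoutputformat']
if isinstance(value, list):   # spelled as a len() check in the code above
    value = value[-1]         # keep the last submitted value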
extra_newznabs = []
for kwarg in [x for x in kwargs if x.startswith('newznab_host')]:
newznab_host_key = kwarg
@@ -1473,9 +1605,11 @@ class WebInterface(object):
image_dict = {'artwork': image_url, 'thumbnail': thumb_url}
elif AlbumID and (not image_dict['artwork'] or not image_dict['thumbnail']):
if not image_dict['artwork']:
image_dict['artwork'] = "http://coverartarchive.org/release/%s/front-500.jpg" % AlbumID
image_dict[
'artwork'] = "http://coverartarchive.org/release/%s/front-500.jpg" % AlbumID
if not image_dict['thumbnail']:
image_dict['thumbnail'] = "http://coverartarchive.org/release/%s/front-250.jpg" % AlbumID
image_dict[
'thumbnail'] = "http://coverartarchive.org/release/%s/front-250.jpg" % AlbumID
return json.dumps(image_dict)
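The two fallback URLs follow the Cover Art Archive convention of /release/<MBID>/front-500.jpg and /release/<MBID>/front-250.jpg for pre-scaled front covers. A hedged fetch sketch using requests (the release MBID below is a placeholder):

import requests

album_id = '00000000-0000-0000-0000-000000000000'  # placeholder MusicBrainz release id
url = 'http://coverartarchive.org/release/%s/front-500.jpg' % album_id
resp = requests.get(url, timeout=30)
if resp.status_code == 200:
    with open('cover.jpg', 'wb') as fp:
        fp.write(resp.content)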
@@ -1514,7 +1648,8 @@ class WebInterface(object):
if result:
osx_notify = notifiers.OSX_NOTIFY()
osx_notify.notify('Registered', result, 'Success :-)')
logger.info('Registered %s, to re-register a different app, delete this app first' % result)
logger.info(
'Registered %s, to re-register a different app, delete this app first' % result)
else:
logger.warn(msg)
return msg
@@ -1536,7 +1671,8 @@ class WebInterface(object):
def testPushbullet(self):
logger.info("Testing Pushbullet notifications")
pushbullet = notifiers.PUSHBULLET()
pushbullet.notify("it works!")
pushbullet.notify("it works!", "Test message")
class Artwork(object):
@cherrypy.expose
@@ -1605,4 +1741,6 @@ class Artwork(object):
return fp.read()
thumbs = Thumbs()
WebInterface.artwork = Artwork()
View File
@@ -13,18 +13,17 @@
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import os
import cherrypy
import headphones
from headphones import logger
from headphones.webserve import WebInterface
from headphones.helpers import create_https_certificates
def initialize(options):
# HTTPS stuff stolen from sickbeard
enable_https = options['enable_https']
https_cert = options['https_cert']
@@ -33,15 +32,16 @@ def initialize(options):
if enable_https:
# If either the HTTPS certificate or key do not exist, try to make
# self-signed ones.
if not (https_cert and os.path.exists(https_cert)) or not (https_key and os.path.exists(https_key)):
if not (https_cert and os.path.exists(https_cert)) or not (
https_key and os.path.exists(https_key)):
if not create_https_certificates(https_cert, https_key):
logger.warn("Unable to create certificate and key. Disabling " \
"HTTPS")
"HTTPS")
enable_https = False
if not (os.path.exists(https_cert) and os.path.exists(https_key)):
logger.warn("Disabled HTTPS because of missing certificate and " \
"key.")
"key.")
enable_https = False
options_dict = {
@@ -63,7 +63,7 @@ def initialize(options):
protocol = "http"
logger.info("Starting Headphones web server on %s://%s:%d/", protocol,
options['http_host'], options['http_port'])
options['http_host'], options['http_port'])
cherrypy.config.update(options_dict)
conf = {
@@ -99,7 +99,8 @@ def initialize(options):
}
if options['http_password']:
logger.info("Web server authentication is enabled, username is '%s'", options['http_username'])
logger.info("Web server authentication is enabled, username is '%s'",
options['http_username'])
conf['/'].update({
'tools.auth_basic.on': True,
@@ -118,7 +119,8 @@ def initialize(options):
cherrypy.process.servers.check_port(str(options['http_host']), options['http_port'])
cherrypy.server.start()
except IOError:
sys.stderr.write('Failed to start on port: %i. Is something else running?\n' % (options['http_port']))
sys.stderr.write(
'Failed to start on port: %i. Is something else running?\n' % (options['http_port']))
sys.exit(1)
cherrypy.server.wait()
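The IOError branch above fires when the configured port is already bound. The same condition can be probed up front with a plain socket, which is roughly what cherrypy.process.servers.check_port does; a minimal sketch:

import socket

def port_is_free(host, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except socket.error:
        return False   # something else is already listening
    finally:
        s.close()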
View File
@@ -27,17 +27,17 @@ rcvar=${name}_enable
load_rc_config ${name}
: ${headphones_enable:="NO"}
: ${headphones_user:="_sabnzbd"}
: ${headphones_dir:="/usr/local/headphones"}
: ${headphones_chdir:="${headphones_dir}"}
: ${headphones_pid:="${headphones_dir}/headphones.pid"}
: ${headphones_conf:="${headphones_dir}/config.ini"}
: "${headphones_enable:="NO"}"
: "${headphones_user:="_sabnzbd"}"
: "${headphones_dir:="/usr/local/headphones"}"
: "${headphones_chdir:="${headphones_dir}"}"
: "${headphones_pid:="${headphones_dir}/headphones.pid"}"
: "${headphones_conf:="${headphones_dir}/config.ini"}"
WGET="/usr/local/bin/wget" # You need wget for this script to safely shutdown Headphones.
if [ -e "${headphones_conf}" ]; then
HOST=`grep -A64 "\[General\]" "${headphones_conf}"|egrep "^http_host"|perl -wple 's/^http_host = (.*)$/$1/'`
PORT=`grep -A64 "\[General\]" "${headphones_conf}"|egrep "^http_port"|perl -wple 's/^http_port = (.*)$/$1/'`
HOST=$(grep -A64 "\[General\]" "${headphones_conf}"|egrep "^http_host"|perl -wple 's/^http_host = (.*)$/$1/')
PORT=$(grep -A64 "\[General\]" "${headphones_conf}"|egrep "^http_port"|perl -wple 's/^http_port = (.*)$/$1/')
fi
status_cmd="${name}_status"
@@ -53,15 +53,15 @@ if [ ! -x "${WGET}" ]; then
fi
# Ensure user is root when running this script.
if [ `id -u` != "0" ]; then
if [ "$(id -u)" != "0" ]; then
echo "Oops, you should be root before running this!"
exit 1
fi
verify_headphones_pid() {
# Make sure the pid corresponds to the Headphones process.
pid=`cat ${headphones_pid} 2>/dev/null`
ps -p ${pid} | grep -q "python ${headphones_dir}/Headphones.py"
pid=$(cat "${headphones_pid}" 2>/dev/null)
pgrep -F "${headphones_pid}" -q "python ${headphones_dir}/Headphones.py"
return $?
}
@@ -73,10 +73,10 @@ headphones_stop() {
fi
echo "Stopping $name"
verify_headphones_pid
${WGET} -O - -q --user=${SBUSR} --password=${SBPWD} "http://${HOST}:${PORT}/shutdown/" >/dev/null
${WGET} -O - -q --user="${SBUSR}" --password="${SBPWD}" "http://${HOST}:${PORT}/shutdown/" >/dev/null
if [ -n "${pid}" ]; then
wait_for_pids ${pid}
wait_for_pids "${pid}"
echo "Stopped $name"
fi
}
View File
@@ -28,11 +28,11 @@ rcvar=${name}_enable
load_rc_config ${name}
: ${headphones_enable:="NO"}
: ${headphones_user:="_sabnzbd"}
: ${headphones_dir:="/usr/local/headphones"}
: ${headphones_chdir:="${headphones_dir}"}
: ${headphones_pid:="${headphones_dir}/headphones.pid"}
: "${headphones_enable:="NO"}"
: "${headphones_user:="_sabnzbd"}"
: "${headphones_dir:="/usr/local/headphones"}"
: "${headphones_chdir:="${headphones_dir}"}"
: "${headphones_pid:="${headphones_dir}/headphones.pid"}"
status_cmd="${name}_status"
stop_cmd="${name}_stop"
@@ -41,15 +41,15 @@ command="/usr/sbin/daemon"
command_args="-f -p ${headphones_pid} python ${headphones_dir}/Headphones.py ${headphones_flags} --quiet --nolaunch"
# Ensure user is root when running this script.
if [ `id -u` != "0" ]; then
if [ "$(id -u)" != "0" ]; then
echo "Oops, you should be root before running this!"
exit 1
fi
verify_headphones_pid() {
# Make sure the pid corresponds to the Headphones process.
pid=`cat ${headphones_pid} 2>/dev/null`
ps -p ${pid} | grep -q "python ${headphones_dir}/Headphones.py"
pid=$(cat "${headphones_pid}" 2>/dev/null)
pgrep -F "${headphones_pid}" -q "python ${headphones_dir}/Headphones.py"
return $?
}
@@ -58,7 +58,7 @@ headphones_stop() {
echo "Stopping $name"
verify_headphones_pid
if [ -n "${pid}" ]; then
wait_for_pids ${pid}
wait_for_pids "${pid}"
echo "Stopped"
fi
}
View File
@@ -32,7 +32,6 @@
## HP_PIDFILE= #$PID_FILE, the location of headphones.pid, the default is /var/run/headphones/headphones.pid
## PYTHON_BIN= #$DAEMON, the location of the python binary, the default is /usr/bin/python
## HP_OPTS= #$EXTRA_DAEMON_OPTS, extra cli option for headphones, i.e. " --config=/home/headphones/config.ini"
## SSD_OPTS= #$EXTRA_SSD_OPTS, extra start-stop-daemon option like " --group=users"
## HP_PORT= #$PORT_OPTS, hardcoded port for the webserver, overrides value in config.ini
##
## EXAMPLE if want to run as different user
@@ -101,9 +100,6 @@ load_settings() {
# Extra daemon option like: HP_OPTS=" --config=/home/headphones/config.ini"
EXTRA_DAEMON_OPTS=${HP_OPTS-}
# Extra start-stop-daemon option like START_OPTS=" --group=users"
EXTRA_SSD_OPTS=${SSD_OPTS-}
# Hardcoded port to run on, overrides config.ini settings
[ -n "$HP_PORT" ] && {
PORT_OPTS=" --port=${HP_PORT} "
@@ -114,7 +110,7 @@ load_settings() {
SETTINGS_LOADED=TRUE
fi
[ -x $DAEMON ] || {
[ -x "$DAEMON" ] || {
log_warning_msg "$DESC: Can't execute daemon, aborting. See $DAEMON";
return 1;}
@@ -125,8 +121,8 @@ load_settings || exit 0
is_running () {
# returns 1 when running, else 0.
if [ -e $PID_FILE ]; then
PID=`cat $PID_FILE`
if [ -e "$PID_FILE" ]; then
PID=$(cat "$PID_FILE")
RET=$?
[ $RET -gt 1 ] && exit 1 || return $RET
@@ -136,28 +132,28 @@ is_running () {
}
handle_pid () {
PID_PATH=`dirname $PID_FILE`
[ -d $PID_PATH ] || mkdir -p $PID_PATH && chown -R $RUN_AS $PID_PATH > /dev/null || {
PID_PATH=$(dirname "$PID_FILE")
[ -d "$PID_PATH" ] || mkdir -p "$PID_PATH" && chown -R "$RUN_AS" "$PID_PATH" > /dev/null || {
log_warning_msg "$DESC: Could not create $PID_FILE, See $SETTINGS, aborting.";
return 1;}
if [ -e $PID_FILE ]; then
PID=`cat $PID_FILE`
if ! kill -0 $PID > /dev/null 2>&1; then
if [ -e "$PID_FILE" ]; then
PID=$(cat "$PID_FILE")
if ! kill -0 "$PID" > /dev/null 2>&1; then
log_warning_msg "Removing stale $PID_FILE"
rm $PID_FILE
rm "$PID_FILE"
fi
fi
}
handle_datadir () {
[ -d $DATA_DIR ] || mkdir -p $DATA_DIR && chown -R $RUN_AS $DATA_DIR > /dev/null || {
[ -d "$DATA_DIR" ] || mkdir -p "$DATA_DIR" && chown -R "$RUN_AS" "$DATA_DIR" > /dev/null || {
log_warning_msg "$DESC: Could not create $DATA_DIR, See $SETTINGS, aborting.";
return 1;}
}
handle_updates () {
chown -R $RUN_AS $APP_PATH > /dev/null || {
chown -R "$RUN_AS" "$APP_PATH" > /dev/null || {
log_warning_msg "$DESC: $APP_PATH not writable by $RUN_AS for web-updates";
return 0; }
}
@@ -168,7 +164,7 @@ start_headphones () {
handle_updates
if ! is_running; then
log_daemon_msg "Starting $DESC"
start-stop-daemon -o -d $APP_PATH -c $RUN_AS --start $EXTRA_SSD_OPTS --pidfile $PID_FILE --exec $DAEMON -- $DAEMON_OPTS
start-stop-daemon -o -d "$APP_PATH" -c "$RUN_AS" --start "$EXTRA_SSD"_OPTS --pidfile "$PID_FILE" --exec "$DAEMON" -- "$DAEMON_OPTS"
check_retval
else
log_success_msg "$DESC: already running (pid $PID)"
@@ -178,7 +174,7 @@ start_headphones () {
stop_headphones () {
if is_running; then
log_daemon_msg "Stopping $DESC"
start-stop-daemon -o --stop --pidfile $PID_FILE --retry 15
start-stop-daemon -o --stop --pidfile "$PID_FILE" --retry 15
check_retval
else
log_success_msg "$DESC: not running"
View File
@@ -1,13 +0,0 @@
Copyright 2014 Kenneth Reitz
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
View File
@@ -1,54 +0,0 @@
Requests includes some vendorized python libraries to ease installation.
Urllib3 License
===============
This is the MIT license: http://www.opensource.org/licenses/mit-license.php
Copyright 2008-2011 Andrey Petrov and contributors (see CONTRIBUTORS.txt),
Modifications copyright 2012 Kenneth Reitz.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Chardet License
===============
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
02110-1301 USA
CA Bundle License
=================
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
View File
@@ -1,85 +0,0 @@
Requests: HTTP for Humans
=========================
.. image:: https://badge.fury.io/py/requests.png
:target: http://badge.fury.io/py/requests
.. image:: https://pypip.in/d/requests/badge.png
:target: https://crate.io/packages/requests/
Requests is an Apache2 Licensed HTTP library, written in Python, for human
beings.
Most existing Python modules for sending HTTP requests are extremely
verbose and cumbersome. Python's builtin urllib2 module provides most of
the HTTP capabilities you should need, but the api is thoroughly broken.
It requires an enormous amount of work (even method overrides) to
perform the simplest of tasks.
Things shouldn't be this way. Not in Python.
.. code-block:: python
>>> r = requests.get('https://api.github.com', auth=('user', 'pass'))
>>> r.status_code
204
>>> r.headers['content-type']
'application/json'
>>> r.text
...
See `the same code, without Requests <https://gist.github.com/973705>`_.
Requests allow you to send HTTP/1.1 requests. You can add headers, form data,
multipart files, and parameters with simple Python dictionaries, and access the
response data in the same way. It's powered by httplib and `urllib3
<https://github.com/shazow/urllib3>`_, but it does all the hard work and crazy
hacks for you.
Features
--------
- International Domains and URLs
- Keep-Alive & Connection Pooling
- Sessions with Cookie Persistence
- Browser-style SSL Verification
- Basic/Digest Authentication
- Elegant Key/Value Cookies
- Automatic Decompression
- Unicode Response Bodies
- Multipart File Uploads
- Connection Timeouts
- Thread-safety
- HTTP(S) proxy support
Installation
------------
To install Requests, simply:
.. code-block:: bash
$ pip install requests
Documentation
-------------
Documentation is available at http://docs.python-requests.org/.
Contribute
----------
#. Check for open issues or open a fresh issue to start a discussion around a feature idea or a bug. There is a `Contributor Friendly`_ tag for issues that should be ideal for people who are not very familiar with the codebase yet.
#. If you feel uncomfortable or uncertain about an issue or your changes, feel free to email @sigmavirus24 and he will happily help you via email, Skype, remote pairing or whatever you are comfortable with.
#. Fork `the repository`_ on GitHub to start making your changes to the **master** branch (or branch off of it).
#. Write a test which shows that the bug was fixed or that the feature works as expected.
#. Send a pull request and bug the maintainer until it gets merged and published. :) Make sure to add yourself to AUTHORS_.
.. _`the repository`: http://github.com/kennethreitz/requests
.. _AUTHORS: https://github.com/kennethreitz/requests/blob/master/AUTHORS.rst
.. _Contributor Friendly: https://github.com/kennethreitz/requests/issues?direction=desc&labels=Contributor+Friendly&page=1&sort=updated&state=open
View File
@@ -6,7 +6,7 @@
# /
"""
requests HTTP library
Requests HTTP library
~~~~~~~~~~~~~~~~~~~~~
Requests is an HTTP library, written in Python, for human beings. Basic GET
@@ -36,17 +36,17 @@ usage:
The other HTTP methods are supported - see `requests.api`. Full documentation
is at <http://python-requests.org>.
:copyright: (c) 2014 by Kenneth Reitz.
:copyright: (c) 2015 by Kenneth Reitz.
:license: Apache 2.0, see LICENSE for more details.
"""
__title__ = 'requests'
__version__ = '2.5.1'
__build__ = 0x020501
__version__ = '2.7.0'
__build__ = 0x020700
__author__ = 'Kenneth Reitz'
__license__ = 'Apache 2.0'
__copyright__ = 'Copyright 2014 Kenneth Reitz'
__copyright__ = 'Copyright 2015 Kenneth Reitz'
# Attempt to enable urllib3's SNI support, if possible
try:
View File
@@ -11,13 +11,14 @@ and maintain connections.
import socket
from .models import Response
from .packages.urllib3 import Retry
from .packages.urllib3.poolmanager import PoolManager, proxy_from_url
from .packages.urllib3.response import HTTPResponse
from .packages.urllib3.util import Timeout as TimeoutSauce
from .packages.urllib3.util.retry import Retry
from .compat import urlparse, basestring
from .utils import (DEFAULT_CA_BUNDLE_PATH, get_encoding_from_headers,
prepend_scheme_if_needed, get_auth_from_url, urldefragauth)
prepend_scheme_if_needed, get_auth_from_url, urldefragauth,
select_proxy)
from .structures import CaseInsensitiveDict
from .packages.urllib3.exceptions import ConnectTimeoutError
from .packages.urllib3.exceptions import HTTPError as _HTTPError
@@ -35,6 +36,7 @@ from .auth import _basic_auth_str
DEFAULT_POOLBLOCK = False
DEFAULT_POOLSIZE = 10
DEFAULT_RETRIES = 0
DEFAULT_POOL_TIMEOUT = None
class BaseAdapter(object):
@@ -237,8 +239,7 @@ class HTTPAdapter(BaseAdapter):
:param url: The URL to connect to.
:param proxies: (optional) A Requests-style dictionary of proxies used on this request.
"""
proxies = proxies or {}
proxy = proxies.get(urlparse(url.lower()).scheme)
proxy = select_proxy(url, proxies)
if proxy:
proxy = prepend_scheme_if_needed(proxy, 'http')
@@ -271,12 +272,10 @@ class HTTPAdapter(BaseAdapter):
:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param proxies: A dictionary of schemes to proxy URLs.
:param proxies: A dictionary of schemes or schemes and hosts to proxy URLs.
"""
proxies = proxies or {}
proxy = select_proxy(request.url, proxies)
scheme = urlparse(request.url).scheme
proxy = proxies.get(scheme)
if proxy and scheme != 'https':
url = urldefragauth(request.url)
else:
@@ -309,7 +308,6 @@ class HTTPAdapter(BaseAdapter):
:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
:param proxies: The url of the proxy being used for this request.
:param kwargs: Optional additional keyword arguments.
"""
headers = {}
username, password = get_auth_from_url(proxy)
@@ -326,8 +324,8 @@ class HTTPAdapter(BaseAdapter):
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a (`connect timeout, read
timeout <user/advanced.html#timeouts>`_) tuple.
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
@@ -375,7 +373,7 @@ class HTTPAdapter(BaseAdapter):
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=timeout)
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
@@ -407,9 +405,6 @@ class HTTPAdapter(BaseAdapter):
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
else:
# All is well, return the connection to the pool.
conn._put_conn(low_conn)
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
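These adapter changes import urllib3's Retry from its new util.retry location. For callers, the practical upshot is that a retry policy can be attached per scheme on a Session; a usage sketch against the vendored package path:

import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5,
                status_forcelist=[500, 502, 503, 504])
session.mount('http://', HTTPAdapter(max_retries=retries))
session.mount('https://', HTTPAdapter(max_retries=retries))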
View File
@@ -16,7 +16,6 @@ from . import sessions
def request(method, url, **kwargs):
"""Constructs and sends a :class:`Request <Request>`.
Returns :class:`Response <Response>` object.
:param method: method for the new :class:`Request` object.
:param url: URL for the new :class:`Request` object.
@@ -28,8 +27,8 @@ def request(method, url, **kwargs):
:param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': ('filename', fileobj)}``) for multipart encoding upload.
:param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How long to wait for the server to send data
before giving up, as a float, or a (`connect timeout, read timeout
<user/advanced.html#timeouts>`_) tuple.
before giving up, as a float, or a :ref:`(connect timeout, read
timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
:type allow_redirects: bool
@@ -37,6 +36,8 @@ def request(method, url, **kwargs):
:param verify: (optional) if ``True``, the SSL cert will be verified. A CA_BUNDLE path can also be provided.
:param stream: (optional) if ``False``, the response content will be immediately downloaded.
:param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
:return: :class:`Response <Response>` object
:rtype: requests.Response
Usage::
@@ -54,22 +55,27 @@ def request(method, url, **kwargs):
return response
def get(url, **kwargs):
"""Sends a GET request. Returns :class:`Response` object.
def get(url, params=None, **kwargs):
"""Sends a GET request.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
kwargs.setdefault('allow_redirects', True)
return request('get', url, **kwargs)
return request('get', url, params=params, **kwargs)
def options(url, **kwargs):
"""Sends a OPTIONS request. Returns :class:`Response` object.
"""Sends a OPTIONS request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
kwargs.setdefault('allow_redirects', True)
@@ -77,10 +83,12 @@ def options(url, **kwargs):
def head(url, **kwargs):
"""Sends a HEAD request. Returns :class:`Response` object.
"""Sends a HEAD request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
kwargs.setdefault('allow_redirects', False)
@@ -88,44 +96,52 @@ def head(url, **kwargs):
def post(url, data=None, json=None, **kwargs):
"""Sends a POST request. Returns :class:`Response` object.
"""Sends a POST request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request('post', url, data=data, json=json, **kwargs)
def put(url, data=None, **kwargs):
"""Sends a PUT request. Returns :class:`Response` object.
"""Sends a PUT request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request('put', url, data=data, **kwargs)
def patch(url, data=None, **kwargs):
"""Sends a PATCH request. Returns :class:`Response` object.
"""Sends a PATCH request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request('patch', url, data=data, **kwargs)
def delete(url, **kwargs):
"""Sends a DELETE request. Returns :class:`Response` object.
"""Sends a DELETE request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request('delete', url, **kwargs)
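The reworked docstrings reflect how the module-level helpers are called: get() now takes params as a dedicated keyword, and timeout accepts a (connect, read) tuple. For example:

import requests

r = requests.get('https://api.github.com/search/repositories',
                 params={'q': 'headphones'},  # appended to the query string
                 timeout=(3.05, 27))          # (connect timeout, read timeout)
r.raise_for_status()
data = r.json()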
View File
@@ -103,7 +103,8 @@ class HTTPDigestAuth(AuthBase):
# XXX not implemented yet
entdig = None
p_parsed = urlparse(url)
path = p_parsed.path
#: path is request-uri defined in RFC 2616 which should not be empty
path = p_parsed.path or "/"
if p_parsed.query:
path += '?' + p_parsed.query
@@ -124,13 +125,15 @@ class HTTPDigestAuth(AuthBase):
s += os.urandom(8)
cnonce = (hashlib.sha1(s).hexdigest()[:16])
noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, HA2)
if _algorithm == 'MD5-SESS':
HA1 = hash_utf8('%s:%s:%s' % (HA1, nonce, cnonce))
if qop is None:
respdig = KD(HA1, "%s:%s" % (nonce, HA2))
elif qop == 'auth' or 'auth' in qop.split(','):
noncebit = "%s:%s:%s:%s:%s" % (
nonce, ncvalue, cnonce, 'auth', HA2
)
respdig = KD(HA1, noncebit)
else:
# XXX handle auth-int.
@@ -176,7 +179,7 @@ class HTTPDigestAuth(AuthBase):
# Consume content and release the original connection
# to allow our new request to reuse the same one.
r.content
r.raw.release_conn()
r.close()
prep = r.request.copy()
extract_cookies_to_jar(prep._cookies, r.request, r.raw)
prep.prepare_cookies(prep._cookies)
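The digest handler now defaults the request path to '/' per RFC 2616 and closes the 401 response before retrying. From the caller's side the flow is unchanged; typical usage (the httpbin endpoint is used purely for illustration):

import requests
from requests.auth import HTTPDigestAuth

r = requests.get('http://httpbin.org/digest-auth/auth/user/pass',
                 auth=HTTPDigestAuth('user', 'pass'))
print(r.status_code)   # 200 once the challenge/response round trip completes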
View File
@@ -21,58 +21,6 @@ is_py2 = (_ver[0] == 2)
#: Python 3.x?
is_py3 = (_ver[0] == 3)
#: Python 3.0.x
is_py30 = (is_py3 and _ver[1] == 0)
#: Python 3.1.x
is_py31 = (is_py3 and _ver[1] == 1)
#: Python 3.2.x
is_py32 = (is_py3 and _ver[1] == 2)
#: Python 3.3.x
is_py33 = (is_py3 and _ver[1] == 3)
#: Python 3.4.x
is_py34 = (is_py3 and _ver[1] == 4)
#: Python 2.7.x
is_py27 = (is_py2 and _ver[1] == 7)
#: Python 2.6.x
is_py26 = (is_py2 and _ver[1] == 6)
#: Python 2.5.x
is_py25 = (is_py2 and _ver[1] == 5)
#: Python 2.4.x
is_py24 = (is_py2 and _ver[1] == 4) # I'm assuming this is not by choice.
# ---------
# Platforms
# ---------
# Syntax sugar.
_ver = sys.version.lower()
is_pypy = ('pypy' in _ver)
is_jython = ('jython' in _ver)
is_ironpython = ('iron' in _ver)
# Assume CPython, if nothing else.
is_cpython = not any((is_pypy, is_jython, is_ironpython))
# Windows-based system.
is_windows = 'win32' in str(sys.platform).lower()
# Standard Linux 2+ system.
is_linux = ('linux' in str(sys.platform).lower())
is_osx = ('darwin' in str(sys.platform).lower())
is_hpux = ('hpux' in str(sys.platform).lower()) # Complete guess.
is_solaris = ('solar==' in str(sys.platform).lower()) # Complete guess.
try:
import simplejson as json
except (ImportError, SyntaxError):
@@ -99,7 +47,6 @@ if is_py2:
basestring = basestring
numeric_types = (int, long, float)
elif is_py3:
from urllib.parse import urlparse, urlunparse, urljoin, urlsplit, urlencode, quote, unquote, quote_plus, unquote_plus, urldefrag
from urllib.request import parse_http_list, getproxies, proxy_bypass
View File
@@ -6,6 +6,7 @@ Compatibility code to be able to use `cookielib.CookieJar` with requests.
requests.utils imports from here, so be careful with imports.
"""
import copy
import time
import collections
from .compat import cookielib, urlparse, urlunparse, Morsel
@@ -157,26 +158,28 @@ class CookieConflictError(RuntimeError):
class RequestsCookieJar(cookielib.CookieJar, collections.MutableMapping):
"""Compatibility class; is a cookielib.CookieJar, but exposes a dict interface.
"""Compatibility class; is a cookielib.CookieJar, but exposes a dict
interface.
This is the CookieJar we create by default for requests and sessions that
don't specify one, since some clients may expect response.cookies and
session.cookies to support dict operations.
Don't use the dict interface internally; it's just for compatibility with
with external client code. All `requests` code should work out of the box
with externally provided instances of CookieJar, e.g., LWPCookieJar and
FileCookieJar.
Caution: dictionary operations that are normally O(1) may be O(n).
Requests does not use the dict interface internally; it's just for
compatibility with external client code. All requests code should work
out of the box with externally provided instances of ``CookieJar``, e.g.
``LWPCookieJar`` and ``FileCookieJar``.
Unlike a regular CookieJar, this class is pickleable.
"""
.. warning:: dictionary operations that are normally O(1) may be O(n).
"""
def get(self, name, default=None, domain=None, path=None):
"""Dict-like get() that also supports optional domain and path args in
order to resolve naming collisions from using one cookie jar over
multiple domains. Caution: operation is O(n), not O(1)."""
multiple domains.
.. warning:: operation is O(n), not O(1)."""
try:
return self._find_no_duplicates(name, domain, path)
except KeyError:
@@ -199,37 +202,38 @@ class RequestsCookieJar(cookielib.CookieJar, collections.MutableMapping):
return c
def iterkeys(self):
"""Dict-like iterkeys() that returns an iterator of names of cookies from the jar.
See itervalues() and iteritems()."""
"""Dict-like iterkeys() that returns an iterator of names of cookies
from the jar. See itervalues() and iteritems()."""
for cookie in iter(self):
yield cookie.name
def keys(self):
"""Dict-like keys() that returns a list of names of cookies from the jar.
See values() and items()."""
"""Dict-like keys() that returns a list of names of cookies from the
jar. See values() and items()."""
return list(self.iterkeys())
def itervalues(self):
"""Dict-like itervalues() that returns an iterator of values of cookies from the jar.
See iterkeys() and iteritems()."""
"""Dict-like itervalues() that returns an iterator of values of cookies
from the jar. See iterkeys() and iteritems()."""
for cookie in iter(self):
yield cookie.value
def values(self):
"""Dict-like values() that returns a list of values of cookies from the jar.
See keys() and items()."""
"""Dict-like values() that returns a list of values of cookies from the
jar. See keys() and items()."""
return list(self.itervalues())
def iteritems(self):
"""Dict-like iteritems() that returns an iterator of name-value tuples from the jar.
See iterkeys() and itervalues()."""
"""Dict-like iteritems() that returns an iterator of name-value tuples
from the jar. See iterkeys() and itervalues()."""
for cookie in iter(self):
yield cookie.name, cookie.value
def items(self):
"""Dict-like items() that returns a list of name-value tuples from the jar.
See keys() and values(). Allows client-code to call "dict(RequestsCookieJar)
and get a vanilla python dict of key value pairs."""
"""Dict-like items() that returns a list of name-value tuples from the
jar. See keys() and values(). Allows client-code to call
``dict(RequestsCookieJar)`` and get a vanilla python dict of key value
pairs."""
return list(self.iteritems())
def list_domains(self):
@@ -259,8 +263,9 @@ class RequestsCookieJar(cookielib.CookieJar, collections.MutableMapping):
return False # there is only one domain in jar
def get_dict(self, domain=None, path=None):
"""Takes as an argument an optional domain and path and returns a plain old
Python dict of name-value pairs of cookies that meet the requirements."""
"""Takes as an argument an optional domain and path and returns a plain
old Python dict of name-value pairs of cookies that meet the
requirements."""
dictionary = {}
for cookie in iter(self):
if (domain is None or cookie.domain == domain) and (path is None
@@ -269,21 +274,24 @@ class RequestsCookieJar(cookielib.CookieJar, collections.MutableMapping):
return dictionary
def __getitem__(self, name):
"""Dict-like __getitem__() for compatibility with client code. Throws exception
if there are more than one cookie with name. In that case, use the more
explicit get() method instead. Caution: operation is O(n), not O(1)."""
"""Dict-like __getitem__() for compatibility with client code. Throws
exception if there are more than one cookie with name. In that case,
use the more explicit get() method instead.
.. warning:: operation is O(n), not O(1)."""
return self._find_no_duplicates(name)
def __setitem__(self, name, value):
"""Dict-like __setitem__ for compatibility with client code. Throws exception
if there is already a cookie of that name in the jar. In that case, use the more
explicit set() method instead."""
"""Dict-like __setitem__ for compatibility with client code. Throws
exception if there is already a cookie of that name in the jar. In that
case, use the more explicit set() method instead."""
self.set(name, value)
def __delitem__(self, name):
"""Deletes a cookie given a name. Wraps cookielib.CookieJar's remove_cookie_by_name()."""
"""Deletes a cookie given a name. Wraps ``cookielib.CookieJar``'s
``remove_cookie_by_name()``."""
remove_cookie_by_name(self, name)
def set_cookie(self, cookie, *args, **kwargs):
@@ -295,15 +303,16 @@ class RequestsCookieJar(cookielib.CookieJar, collections.MutableMapping):
"""Updates this jar with cookies from another CookieJar or dict-like"""
if isinstance(other, cookielib.CookieJar):
for cookie in other:
self.set_cookie(cookie)
self.set_cookie(copy.copy(cookie))
else:
super(RequestsCookieJar, self).update(other)
def _find(self, name, domain=None, path=None):
"""Requests uses this method internally to get cookie values. Takes as args name
and optional domain and path. Returns a cookie.value. If there are conflicting cookies,
_find arbitrarily chooses one. See _find_no_duplicates if you want an exception thrown
if there are conflicting cookies."""
"""Requests uses this method internally to get cookie values. Takes as
args name and optional domain and path. Returns a cookie.value. If
there are conflicting cookies, _find arbitrarily chooses one. See
_find_no_duplicates if you want an exception thrown if there are
conflicting cookies."""
for cookie in iter(self):
if cookie.name == name:
if domain is None or cookie.domain == domain:
@@ -313,10 +322,11 @@ class RequestsCookieJar(cookielib.CookieJar, collections.MutableMapping):
raise KeyError('name=%r, domain=%r, path=%r' % (name, domain, path))
def _find_no_duplicates(self, name, domain=None, path=None):
"""__get_item__ and get call _find_no_duplicates -- never used in Requests internally.
Takes as args name and optional domain and path. Returns a cookie.value.
Throws KeyError if cookie is not found and CookieConflictError if there are
multiple cookies that match name and optionally domain and path."""
"""Both ``__get_item__`` and ``get`` call this function: it's never
used elsewhere in Requests. Takes as args name and optional domain and
path. Returns a cookie.value. Throws KeyError if cookie is not found
and CookieConflictError if there are multiple cookies that match name
and optionally domain and path."""
toReturn = None
for cookie in iter(self):
if cookie.name == name:
@@ -350,6 +360,21 @@ class RequestsCookieJar(cookielib.CookieJar, collections.MutableMapping):
return new_cj
def _copy_cookie_jar(jar):
if jar is None:
return None
if hasattr(jar, 'copy'):
# We're dealing with an instance of RequestsCookieJar
return jar.copy()
# We're dealing with a generic CookieJar instance
new_jar = copy.copy(jar)
new_jar.clear()
for cookie in jar:
new_jar.set_cookie(copy.copy(cookie))
return new_jar
def create_cookie(name, value, **kwargs):
"""Make a cookie from underspecified parameters.
@@ -390,11 +415,14 @@ def morsel_to_cookie(morsel):
expires = None
if morsel['max-age']:
expires = time.time() + morsel['max-age']
try:
expires = int(time.time() + int(morsel['max-age']))
except ValueError:
raise TypeError('max-age: %s must be integer' % morsel['max-age'])
elif morsel['expires']:
time_template = '%a, %d-%b-%Y %H:%M:%S GMT'
expires = time.mktime(
time.strptime(morsel['expires'], time_template)) - time.timezone
expires = int(time.mktime(
time.strptime(morsel['expires'], time_template)) - time.timezone)
return create_cookie(
comment=morsel['comment'],
comment_url=bool(morsel['comment']),
@@ -440,7 +468,7 @@ def merge_cookies(cookiejar, cookies):
"""
if not isinstance(cookiejar, cookielib.CookieJar):
raise ValueError('You can only merge into CookieJar')
if isinstance(cookies, dict):
cookiejar = cookiejar_from_dict(
cookies, cookiejar=cookiejar, overwrite=False)
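The dict-like surface described in these docstrings looks like the following in practice; the domain-qualified get() is what resolves name collisions across hosts:

from requests.cookies import RequestsCookieJar

jar = RequestsCookieJar()
jar.set('session', 'abc', domain='one.example.com', path='/')
jar.set('session', 'xyz', domain='two.example.com', path='/')
print(jar.get('session', domain='one.example.com'))  # 'abc'
print(jar.get_dict(domain='two.example.com'))        # {'session': 'xyz'}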
View File
@@ -15,7 +15,7 @@ from .hooks import default_hooks
from .structures import CaseInsensitiveDict
from .auth import HTTPBasicAuth
from .cookies import cookiejar_from_dict, get_cookie_header
from .cookies import cookiejar_from_dict, get_cookie_header, _copy_cookie_jar
from .packages.urllib3.fields import RequestField
from .packages.urllib3.filepost import encode_multipart_formdata
from .packages.urllib3.util import parse_url
@@ -30,7 +30,8 @@ from .utils import (
iter_slices, guess_json_utf, super_len, to_native_string)
from .compat import (
cookielib, urlunparse, urlsplit, urlencode, str, bytes, StringIO,
is_py2, chardet, json, builtin_str, basestring)
is_py2, chardet, builtin_str, basestring)
from .compat import json as complexjson
from .status_codes import codes
#: The set of HTTP status codes that indicate an automatically
@@ -42,12 +43,11 @@ REDIRECT_STATI = (
codes.temporary_redirect, # 307
codes.permanent_redirect, # 308
)
DEFAULT_REDIRECT_LIMIT = 30
CONTENT_CHUNK_SIZE = 10 * 1024
ITER_CHUNK_SIZE = 512
json_dumps = json.dumps
class RequestEncodingMixin(object):
@property
@@ -143,13 +143,13 @@ class RequestEncodingMixin(object):
else:
fn = guess_filename(v) or k
fp = v
if isinstance(fp, str):
fp = StringIO(fp)
if isinstance(fp, bytes):
fp = BytesIO(fp)
rf = RequestField(name=k, data=fp.read(),
filename=fn, headers=fh)
if isinstance(fp, (str, bytes, bytearray)):
fdata = fp
else:
fdata = fp.read()
rf = RequestField(name=k, data=fdata, filename=fn, headers=fh)
rf.make_multipart(content_type=ft)
new_fields.append(rf)
@@ -206,17 +206,8 @@ class Request(RequestHooksMixin):
<PreparedRequest [GET]>
"""
def __init__(self,
method=None,
url=None,
headers=None,
files=None,
data=None,
params=None,
auth=None,
cookies=None,
hooks=None,
json=None):
def __init__(self, method=None, url=None, headers=None, files=None,
data=None, params=None, auth=None, cookies=None, hooks=None, json=None):
# Default empty dicts for dict params.
data = [] if data is None else data
@@ -295,8 +286,7 @@ class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
self.hooks = default_hooks()
def prepare(self, method=None, url=None, headers=None, files=None,
data=None, params=None, auth=None, cookies=None, hooks=None,
json=None):
data=None, params=None, auth=None, cookies=None, hooks=None, json=None):
"""Prepares the entire request with the given parameters."""
self.prepare_method(method)
@@ -305,6 +295,7 @@ class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
self.prepare_cookies(cookies)
self.prepare_body(data, files, json)
self.prepare_auth(auth, url)
# Note that prepare_auth must be last to enable authentication schemes
# such as OAuth to work on a fully prepared request.
@@ -319,7 +310,7 @@ class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
p.method = self.method
p.url = self.url
p.headers = self.headers.copy() if self.headers is not None else None
p._cookies = self._cookies.copy() if self._cookies is not None else None
p._cookies = _copy_cookie_jar(self._cookies)
p.body = self.body
p.hooks = self.hooks
return p
@@ -356,8 +347,10 @@ class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
raise InvalidURL(*e.args)
if not scheme:
raise MissingSchema("Invalid URL {0!r}: No schema supplied. "
"Perhaps you meant http://{0}?".format(url))
error = ("Invalid URL {0!r}: No schema supplied. Perhaps you meant http://{0}?")
error = error.format(to_native_string(url, 'utf8'))
raise MissingSchema(error)
if not host:
raise InvalidURL("Invalid URL %r: No host supplied" % url)
@@ -423,7 +416,7 @@ class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
if json is not None:
content_type = 'application/json'
body = json_dumps(json)
body = complexjson.dumps(json)
is_stream = all([
hasattr(data, '__iter__'),
@@ -500,7 +493,15 @@ class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
self.prepare_content_length(self.body)
def prepare_cookies(self, cookies):
"""Prepares the given HTTP cookie data."""
"""Prepares the given HTTP cookie data.
This function eventually generates a ``Cookie`` header from the
given cookies using cookielib. Due to cookielib's design, the header
will not be regenerated if it already exists, meaning this function
can only be called once for the life of the
:class:`PreparedRequest <PreparedRequest>` object. Any subsequent calls
to ``prepare_cookies`` will have no actual effect, unless the "Cookie"
header is removed beforehand."""
if isinstance(cookies, cookielib.CookieJar):
self._cookies = cookies
@@ -513,6 +514,10 @@ class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
def prepare_hooks(self, hooks):
"""Prepares the given hooks."""
# hooks can be passed as None to the prepare method and to this
# method. To prevent iterating over None, simply use an empty list
# if hooks is False-y
hooks = hooks or []
for event in hooks:
self.register_hook(event, hooks[event])
@@ -523,16 +528,8 @@ class Response(object):
"""
__attrs__ = [
'_content',
'status_code',
'headers',
'url',
'history',
'encoding',
'reason',
'cookies',
'elapsed',
'request',
'_content', 'status_code', 'headers', 'url', 'history',
'encoding', 'reason', 'cookies', 'elapsed', 'request'
]
def __init__(self):
@@ -572,7 +569,11 @@ class Response(object):
self.cookies = cookiejar_from_dict({})
#: The amount of time elapsed between sending the request
#: and the arrival of the response (as a timedelta)
#: and the arrival of the response (as a timedelta).
#: This property specifically measures the time taken between sending
#: the first byte of the request and finishing parsing the headers. It
#: is therefore unaffected by consuming the response content or the
#: value of the ``stream`` keyword argument.
self.elapsed = datetime.timedelta(0)
#: The :class:`PreparedRequest <PreparedRequest>` object to which this
@@ -648,9 +649,10 @@ class Response(object):
If decode_unicode is True, content will be decoded using the best
available encoding based on the response.
"""
def generate():
try:
# Special case for urllib3.
# Special case for urllib3.
if hasattr(self.raw, 'stream'):
try:
for chunk in self.raw.stream(chunk_size, decode_content=True):
yield chunk
@@ -660,7 +662,7 @@ class Response(object):
raise ContentDecodingError(e)
except ReadTimeoutError as e:
raise ConnectionError(e)
except AttributeError:
else:
# Standard file-like object.
while True:
chunk = self.raw.read(chunk_size)
@@ -688,6 +690,8 @@ class Response(object):
"""Iterates over the response data, one line at a time. When
stream=True is set on the request, this avoids reading the
content at once into memory for large responses.
.. note:: This method is not reentrant safe.
"""
pending = None
@@ -789,14 +793,16 @@ class Response(object):
encoding = guess_json_utf(self.content)
if encoding is not None:
try:
return json.loads(self.content.decode(encoding), **kwargs)
return complexjson.loads(
self.content.decode(encoding), **kwargs
)
except UnicodeDecodeError:
# Wrong UTF codec detected; usually because it's not UTF-8
# but some other 8-bit codec. This is an RFC violation,
# and the server didn't bother to tell us what codec *was*
# used.
pass
return json.loads(self.text, **kwargs)
return complexjson.loads(self.text, **kwargs)
@property
def links(self):
@@ -822,10 +828,10 @@ class Response(object):
http_error_msg = ''
if 400 <= self.status_code < 500:
http_error_msg = '%s Client Error: %s' % (self.status_code, self.reason)
http_error_msg = '%s Client Error: %s for url: %s' % (self.status_code, self.reason, self.url)
elif 500 <= self.status_code < 600:
http_error_msg = '%s Server Error: %s' % (self.status_code, self.reason)
http_error_msg = '%s Server Error: %s for url: %s' % (self.status_code, self.reason, self.url)
if http_error_msg:
raise HTTPError(http_error_msg, response=self)
@@ -836,4 +842,7 @@ class Response(object):
*Note: Should not normally need to be called explicitly.*
"""
if not self._content_consumed:
return self.raw.close()
return self.raw.release_conn()
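With close() now delegating to release_conn(), a streamed response returns its socket to the pool instead of discarding it, and raise_for_status() errors now name the offending URL. A small demonstration (httpbin used purely for illustration):

import requests

r = requests.get('http://httpbin.org/status/404')
try:
    r.raise_for_status()
except requests.HTTPError as exc:
    print(exc)  # e.g. "404 Client Error: NOT FOUND for url: http://httpbin.org/status/404"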
View File
@@ -0,0 +1,8 @@
If you are planning to submit a pull request to requests with any changes in
this library do not go any further. These are independent libraries which we
vendor into requests. Any changes necessary to these libraries must be made in
them and submitted as separate pull requests to those libraries.
urllib3 pull requests go here: https://github.com/shazow/urllib3
chardet pull requests go here: https://github.com/chardet/chardet
View File
@@ -55,9 +55,14 @@ def add_stderr_logger(level=logging.DEBUG):
del NullHandler
# Set security warning to only go off once by default.
import warnings
warnings.simplefilter('always', exceptions.SecurityWarning)
# SecurityWarning's always go off by default.
warnings.simplefilter('always', exceptions.SecurityWarning, append=True)
# SubjectAltNameWarning's should go off once per host
warnings.simplefilter('default', exceptions.SubjectAltNameWarning)
# InsecurePlatformWarning's don't vary between requests, so we keep it default.
warnings.simplefilter('default', exceptions.InsecurePlatformWarning,
append=True)
def disable_warnings(category=exceptions.HTTPWarning):
"""
View File
@@ -1,7 +1,7 @@
from collections import Mapping, MutableMapping
try:
from threading import RLock
except ImportError: # Platform-specific: No threads available
except ImportError: # Platform-specific: No threads available
class RLock:
def __enter__(self):
pass
@@ -10,11 +10,11 @@ except ImportError: # Platform-specific: No threads available
pass
try: # Python 2.7+
try: # Python 2.7+
from collections import OrderedDict
except ImportError:
from .packages.ordered_dict import OrderedDict
from .packages.six import iterkeys, itervalues
from .packages.six import iterkeys, itervalues, PY3
__all__ = ['RecentlyUsedContainer', 'HTTPHeaderDict']
@@ -129,25 +129,82 @@ class HTTPHeaderDict(MutableMapping):
'foo=bar, baz=quxx'
>>> headers['Content-Length']
'7'
If you want to access the raw headers with their original casing
for debugging purposes you can access the private ``._data`` attribute
which is a normal python ``dict`` that maps the case-insensitive key to a
list of tuples stored as (case-sensitive-original-name, value). Using the
structure from above as our example:
>>> headers._data
{'set-cookie': [('Set-Cookie', 'foo=bar'), ('set-cookie', 'baz=quxx')],
'content-length': [('content-length', '7')]}
"""
def __init__(self, headers=None, **kwargs):
self._data = {}
if headers is None:
headers = {}
self.update(headers, **kwargs)
super(HTTPHeaderDict, self).__init__()
self._container = {}
if headers is not None:
if isinstance(headers, HTTPHeaderDict):
self._copy_from(headers)
else:
self.extend(headers)
if kwargs:
self.extend(kwargs)
def add(self, key, value):
def __setitem__(self, key, val):
self._container[key.lower()] = (key, val)
return self._container[key.lower()]
def __getitem__(self, key):
val = self._container[key.lower()]
return ', '.join(val[1:])
def __delitem__(self, key):
del self._container[key.lower()]
def __contains__(self, key):
return key.lower() in self._container
def __eq__(self, other):
if not isinstance(other, Mapping) and not hasattr(other, 'keys'):
return False
if not isinstance(other, type(self)):
other = type(self)(other)
return (dict((k.lower(), v) for k, v in self.itermerged()) ==
dict((k.lower(), v) for k, v in other.itermerged()))
def __ne__(self, other):
return not self.__eq__(other)
if not PY3: # Python 2
iterkeys = MutableMapping.iterkeys
itervalues = MutableMapping.itervalues
__marker = object()
def __len__(self):
return len(self._container)
def __iter__(self):
# Only provide the originally cased names
for vals in self._container.values():
yield vals[0]
def pop(self, key, default=__marker):
'''D.pop(k[,d]) -> v, remove specified key and return the corresponding value.
If key is not found, d is returned if given, otherwise KeyError is raised.
'''
# Using the MutableMapping function directly fails due to the private marker.
# Using ordinary dict.pop would expose the internal structures.
# So let's reinvent the wheel.
try:
value = self[key]
except KeyError:
if default is self.__marker:
raise
return default
else:
del self[key]
return value
def discard(self, key):
try:
del self[key]
except KeyError:
pass
def add(self, key, val):
"""Adds a (name, value) pair, doesn't overwrite the value if it already
exists.
@@ -156,43 +213,111 @@ class HTTPHeaderDict(MutableMapping):
>>> headers['foo']
'bar, baz'
"""
self._data.setdefault(key.lower(), []).append((key, value))
key_lower = key.lower()
new_vals = key, val
# Keep the common case aka no item present as fast as possible
vals = self._container.setdefault(key_lower, new_vals)
if new_vals is not vals:
# new_vals was not inserted, as there was a previous one
if isinstance(vals, list):
# If already several items got inserted, we have a list
vals.append(val)
else:
# vals should be a tuple then, i.e. only one item so far
# Need to convert the tuple to list for further extension
self._container[key_lower] = [vals[0], vals[1], val]
def extend(self, *args, **kwargs):
"""Generic import function for any type of header-like object.
Adapted version of MutableMapping.update in order to insert items
with self.add instead of self.__setitem__
"""
if len(args) > 1:
raise TypeError("extend() takes at most 1 positional "
"arguments ({} given)".format(len(args)))
other = args[0] if len(args) >= 1 else ()
if isinstance(other, HTTPHeaderDict):
for key, val in other.iteritems():
self.add(key, val)
elif isinstance(other, Mapping):
for key in other:
self.add(key, other[key])
elif hasattr(other, "keys"):
for key in other.keys():
self.add(key, other[key])
else:
for key, value in other:
self.add(key, value)
for key, value in kwargs.items():
self.add(key, value)
def getlist(self, key):
"""Returns a list of all the values for the named field. Returns an
empty list if the key doesn't exist."""
return self[key].split(', ') if key in self else []
try:
vals = self._container[key.lower()]
except KeyError:
return []
else:
if isinstance(vals, tuple):
return [vals[1]]
else:
return vals[1:]
def copy(self):
h = HTTPHeaderDict()
for key in self._data:
for rawkey, value in self._data[key]:
h.add(rawkey, value)
return h
def __eq__(self, other):
if not isinstance(other, Mapping):
return False
other = HTTPHeaderDict(other)
return dict((k1, self[k1]) for k1 in self._data) == \
dict((k2, other[k2]) for k2 in other._data)
def __getitem__(self, key):
values = self._data[key.lower()]
return ', '.join(value[1] for value in values)
def __setitem__(self, key, value):
self._data[key.lower()] = [(key, value)]
def __delitem__(self, key):
del self._data[key.lower()]
def __len__(self):
return len(self._data)
def __iter__(self):
for headers in itervalues(self._data):
yield headers[0][0]
# Backwards compatibility for httplib
getheaders = getlist
getallmatchingheaders = getlist
iget = getlist
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, dict(self.items()))
return "%s(%s)" % (type(self).__name__, dict(self.itermerged()))
def _copy_from(self, other):
for key in other:
val = other.getlist(key)
if isinstance(val, list):
# Don't need to convert tuples
val = list(val)
self._container[key.lower()] = [key] + val
def copy(self):
clone = type(self)()
clone._copy_from(self)
return clone
def iteritems(self):
"""Iterate over all header lines, including duplicate ones."""
for key in self:
vals = self._container[key.lower()]
for val in vals[1:]:
yield vals[0], val
def itermerged(self):
"""Iterate over all headers, merging duplicate ones together."""
for key in self:
val = self._container[key.lower()]
yield val[0], ', '.join(val[1:])
def items(self):
return list(self.iteritems())
@classmethod
def from_httplib(cls, message): # Python 2
"""Read headers from a Python 2 httplib message object."""
# python2.7 does not expose a proper API for exporting multiheaders
# efficiently. This function re-reads raw lines from the message
# object and extracts the multiheaders properly.
headers = []
for line in message.headers:
if line.startswith((' ', '\t')):
key, value = headers[-1]
headers[-1] = (key, value + '\r\n' + line.rstrip())
continue
key, value = line.split(':', 1)
headers.append((key, value.strip()))
return cls(headers)
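A minimal usage sketch of the rewritten HTTPHeaderDict above (module path as vendored here; header names and values are illustrative):

from urllib3._collections import HTTPHeaderDict

h = HTTPHeaderDict()
h.add('Set-Cookie', 'foo=bar')
h.add('set-cookie', 'baz=quxx')       # same key, matched case-insensitively
print(h['SET-COOKIE'])                # 'foo=bar, baz=quxx' -- merged on access
print(h.getlist('Set-Cookie'))        # ['foo=bar', 'baz=quxx']
print(list(h))                        # ['Set-Cookie'] -- original casing kept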

View File

@@ -1,7 +1,7 @@
import datetime
import sys
import socket
from socket import timeout as SocketTimeout
from socket import error as SocketError, timeout as SocketTimeout
import warnings
from .packages import six
@@ -36,9 +36,10 @@ except NameError: # Python 2:
from .exceptions import (
NewConnectionError,
ConnectTimeoutError,
SubjectAltNameWarning,
SystemTimeWarning,
SecurityWarning,
)
from .packages.ssl_match_hostname import match_hostname
@@ -133,11 +134,15 @@ class HTTPConnection(_HTTPConnection, object):
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout:
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
self, "Failed to establish a new connection: %s" % e)
return conn
def _prepare_conn(self, conn):
@@ -185,17 +190,23 @@ class VerifiedHTTPSConnection(HTTPSConnection):
"""
cert_reqs = None
ca_certs = None
ca_cert_dir = None
ssl_version = None
assert_fingerprint = None
def set_cert(self, key_file=None, cert_file=None,
cert_reqs=None, ca_certs=None,
assert_hostname=None, assert_fingerprint=None):
assert_hostname=None, assert_fingerprint=None,
ca_cert_dir=None):
if (ca_certs or ca_cert_dir) and cert_reqs is None:
cert_reqs = 'CERT_REQUIRED'
self.key_file = key_file
self.cert_file = cert_file
self.cert_reqs = cert_reqs
self.ca_certs = ca_certs
self.ca_cert_dir = ca_cert_dir
self.assert_hostname = assert_hostname
self.assert_fingerprint = assert_fingerprint
@@ -234,6 +245,7 @@ class VerifiedHTTPSConnection(HTTPSConnection):
self.sock = ssl_wrap_socket(conn, self.key_file, self.cert_file,
cert_reqs=resolved_cert_reqs,
ca_certs=self.ca_certs,
ca_cert_dir=self.ca_cert_dir,
server_hostname=hostname,
ssl_version=resolved_ssl_version)
@@ -245,10 +257,11 @@ class VerifiedHTTPSConnection(HTTPSConnection):
cert = self.sock.getpeercert()
if not cert.get('subjectAltName', ()):
warnings.warn((
'Certificate has no `subjectAltName`, falling back to check for a `commonName` for now. '
'This feature is being removed by major browsers and deprecated by RFC 2818. '
'(See https://github.com/shazow/urllib3/issues/497 for details.)'),
SecurityWarning
'Certificate for {0} has no `subjectAltName`, falling back to check for a '
'`commonName` for now. This feature is being removed by major browsers and '
'deprecated by RFC 2818. (See https://github.com/shazow/urllib3/issues/497 '
'for details.)'.format(hostname)),
SubjectAltNameWarning
)
match_hostname(cert, self.assert_hostname or hostname)
@@ -260,3 +273,5 @@ if ssl:
# Make a copy for testing.
UnverifiedHTTPSConnection = HTTPSConnection
HTTPSConnection = VerifiedHTTPSConnection
else:
HTTPSConnection = DummyConnection
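A hedged sketch of the new ca_cert_dir option introduced above (the host and directory path are assumptions; any OpenSSL -CApath style directory of hashed CA certificates works):

import urllib3

pool = urllib3.HTTPSConnectionPool(
    'example.com', 443,
    cert_reqs='CERT_REQUIRED',
    ca_cert_dir='/etc/ssl/certs')     # assumed CA directory, as with -CApath
r = pool.request('GET', '/')
print(r.status)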

View File

@@ -17,14 +17,17 @@ from .exceptions import (
ClosedPoolError,
ProtocolError,
EmptyPoolError,
HeaderParsingError,
HostChangedError,
LocationValueError,
MaxRetryError,
ProxyError,
ConnectTimeoutError,
ReadTimeoutError,
SSLError,
TimeoutError,
InsecureRequestWarning,
NewConnectionError,
)
from .packages.ssl_match_hostname import CertificateError
from .packages import six
@@ -38,9 +41,10 @@ from .request import RequestMethods
from .response import HTTPResponse
from .util.connection import is_connection_dropped
from .util.response import assert_header_parsing
from .util.retry import Retry
from .util.timeout import Timeout
from .util.url import get_host
from .util.url import get_host, Url
xrange = six.moves.xrange
@@ -72,6 +76,21 @@ class ConnectionPool(object):
return '%s(host=%r, port=%r)' % (type(self).__name__,
self.host, self.port)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.close()
# Return False to re-raise any potential exceptions
return False
def close():
"""
Close all pooled connections and disable the pool.
"""
pass
# This is taken from http://hg.python.org/cpython/file/7aaba721ebc0/Lib/socket.py#l252
_blocking_errnos = set([errno.EAGAIN, errno.EWOULDBLOCK])
@@ -105,7 +124,7 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
:param maxsize:
Number of connections to save that can be reused. More than 1 is useful
in multithreaded situations. If ``block`` is set to false, more
in multithreaded situations. If ``block`` is set to False, more
connections will be created but they will not be saved once they've
been used.
@@ -266,6 +285,10 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
"""
pass
def _prepare_proxy(self, conn):
# Nothing to do for HTTP connections.
pass
def _get_timeout(self, timeout):
""" Helper that always returns a :class:`urllib3.util.Timeout` """
if timeout is _Default:
@@ -349,7 +372,7 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
# Receive the response from the server
try:
try: # Python 2.7+, use buffering of HTTP responses
try: # Python 2.7, use buffering of HTTP responses
httplib_response = conn.getresponse(buffering=True)
except TypeError: # Python 2.6 and older
httplib_response = conn.getresponse()
@@ -362,8 +385,19 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
log.debug("\"%s %s %s\" %s %s" % (method, url, http_version,
httplib_response.status,
httplib_response.length))
try:
assert_header_parsing(httplib_response.msg)
except HeaderParsingError as hpe: # Platform-specific: Python 3
log.warning(
'Failed to parse headers (url=%s): %s',
self._absolute_url(url), hpe, exc_info=True)
return httplib_response
def _absolute_url(self, path):
return Url(scheme=self.scheme, host=self.host, port=self.port, path=path).url
def close(self):
"""
Close all pooled connections and disable the pool.
@@ -510,11 +544,18 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout,
timeout=timeout_obj,
body=body, headers=headers)
# If we're going to release the connection in ``finally:``, then
@@ -542,26 +583,30 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
if conn:
conn.close()
conn = None
conn = conn and conn.close()
release_conn = True
raise SSLError(e)
except (TimeoutError, HTTPException, SocketError, ConnectionError) as e:
if conn:
# Discard the connection for these exceptions. It will
# be replaced during the next _get_conn() call.
conn.close()
conn = None
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
conn = conn and conn.close()
release_conn = True
raise
stacktrace = sys.exc_info()[2]
if isinstance(e, SocketError) and self.proxy:
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will
# be replaced during the next _get_conn() call.
conn = conn and conn.close()
release_conn = True
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e,
_pool=self, _stacktrace=stacktrace)
retries = retries.increment(method, url, error=e, _pool=self,
_stacktrace=sys.exc_info()[2])
retries.sleep()
# Keep track of the error for the retry warning.
@@ -593,6 +638,9 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
retries = retries.increment(method, url, response=response, _pool=self)
except MaxRetryError:
if retries.raise_on_redirect:
# Release the connection for this response, since we're not
# returning it to be released manually.
response.release_conn()
raise
return response
@@ -629,10 +677,10 @@ class HTTPSConnectionPool(HTTPConnectionPool):
``assert_hostname`` and ``host`` in this order to verify connections.
If ``assert_hostname`` is False, no verification is done.
The ``key_file``, ``cert_file``, ``cert_reqs``, ``ca_certs`` and
``ssl_version`` are only used if :mod:`ssl` is available and are fed into
:meth:`urllib3.util.ssl_wrap_socket` to upgrade the connection socket
into an SSL socket.
The ``key_file``, ``cert_file``, ``cert_reqs``, ``ca_certs``,
``ca_cert_dir``, and ``ssl_version`` are only used if :mod:`ssl` is
available and are fed into :meth:`urllib3.util.ssl_wrap_socket` to upgrade
the connection socket into an SSL socket.
"""
scheme = 'https'
@@ -645,15 +693,20 @@ class HTTPSConnectionPool(HTTPConnectionPool):
key_file=None, cert_file=None, cert_reqs=None,
ca_certs=None, ssl_version=None,
assert_hostname=None, assert_fingerprint=None,
**conn_kw):
ca_cert_dir=None, **conn_kw):
HTTPConnectionPool.__init__(self, host, port, strict, timeout, maxsize,
block, headers, retries, _proxy, _proxy_headers,
**conn_kw)
if ca_certs and cert_reqs is None:
cert_reqs = 'CERT_REQUIRED'
self.key_file = key_file
self.cert_file = cert_file
self.cert_reqs = cert_reqs
self.ca_certs = ca_certs
self.ca_cert_dir = ca_cert_dir
self.ssl_version = ssl_version
self.assert_hostname = assert_hostname
self.assert_fingerprint = assert_fingerprint
@@ -669,28 +722,31 @@ class HTTPSConnectionPool(HTTPConnectionPool):
cert_file=self.cert_file,
cert_reqs=self.cert_reqs,
ca_certs=self.ca_certs,
ca_cert_dir=self.ca_cert_dir,
assert_hostname=self.assert_hostname,
assert_fingerprint=self.assert_fingerprint)
conn.ssl_version = self.ssl_version
if self.proxy is not None:
# Python 2.7+
try:
set_tunnel = conn.set_tunnel
except AttributeError: # Platform-specific: Python 2.6
set_tunnel = conn._set_tunnel
if sys.version_info <= (2, 6, 4) and not self.proxy_headers: # Python 2.6.4 and older
set_tunnel(self.host, self.port)
else:
set_tunnel(self.host, self.port, self.proxy_headers)
# Establish tunnel connection early, because otherwise httplib
# would improperly set Host: header to proxy's IP:port.
conn.connect()
return conn
def _prepare_proxy(self, conn):
"""
Establish tunnel connection early, because otherwise httplib
would improperly set Host: header to proxy's IP:port.
"""
# Python 2.7+
try:
set_tunnel = conn.set_tunnel
except AttributeError: # Platform-specific: Python 2.6
set_tunnel = conn._set_tunnel
if sys.version_info <= (2, 6, 4) and not self.proxy_headers: # Python 2.6.4 and older
set_tunnel(self.host, self.port)
else:
set_tunnel(self.host, self.port, self.proxy_headers)
conn.connect()
def _new_conn(self):
"""
Return a fresh :class:`httplib.HTTPSConnection`.
@@ -700,7 +756,6 @@ class HTTPSConnectionPool(HTTPConnectionPool):
% (self.num_connections, self.host))
if not self.ConnectionCls or self.ConnectionCls is DummyConnection:
# Platform-specific: Python without ssl
raise SSLError("Can't connect to HTTPS URL because the SSL "
"module is not available.")

View File

@@ -0,0 +1,222 @@
import logging
import os
import warnings
from ..exceptions import (
HTTPError,
HTTPWarning,
MaxRetryError,
ProtocolError,
TimeoutError,
SSLError
)
from ..packages.six import BytesIO
from ..request import RequestMethods
from ..response import HTTPResponse
from ..util.timeout import Timeout
from ..util.retry import Retry
try:
from google.appengine.api import urlfetch
except ImportError:
urlfetch = None
log = logging.getLogger(__name__)
class AppEnginePlatformWarning(HTTPWarning):
pass
class AppEnginePlatformError(HTTPError):
pass
class AppEngineManager(RequestMethods):
"""
Connection manager for Google App Engine sandbox applications.
This manager uses the URLFetch service directly instead of using the
emulated httplib, and is subject to URLFetch limitations as described in
the App Engine documentation here:
https://cloud.google.com/appengine/docs/python/urlfetch
Notably it will raise an AppEnginePlatformError if:
* URLFetch is not available.
* If you attempt to use this on GAEv2 (Managed VMs), as full socket
support is available.
* If a request size is more than 10 megabytes.
* If a response size is more than 32 megabytes.
* If you use an unsupported request method such as OPTIONS.
Beyond those cases, it will raise normal urllib3 errors.
"""
def __init__(self, headers=None, retries=None, validate_certificate=True):
if not urlfetch:
raise AppEnginePlatformError(
"URLFetch is not available in this environment.")
if is_prod_appengine_v2():
raise AppEnginePlatformError(
"Use normal urllib3.PoolManager instead of AppEngineManager"
"on Managed VMs, as using URLFetch is not necessary in "
"this environment.")
warnings.warn(
"urllib3 is using URLFetch on Google App Engine sandbox instead "
"of sockets. To use sockets directly instead of URLFetch see "
"https://urllib3.readthedocs.org/en/latest/contrib.html.",
AppEnginePlatformWarning)
RequestMethods.__init__(self, headers)
self.validate_certificate = validate_certificate
self.retries = retries or Retry.DEFAULT
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
# Return False to re-raise any potential exceptions
return False
def urlopen(self, method, url, body=None, headers=None,
retries=None, redirect=True, timeout=Timeout.DEFAULT_TIMEOUT,
**response_kw):
retries = self._get_retries(retries, redirect)
try:
response = urlfetch.fetch(
url,
payload=body,
method=method,
headers=headers or {},
allow_truncated=False,
follow_redirects=(
redirect and
retries.redirect != 0 and
retries.total),
deadline=self._get_absolute_timeout(timeout),
validate_certificate=self.validate_certificate,
)
except urlfetch.DeadlineExceededError as e:
raise TimeoutError(self, e)
except urlfetch.InvalidURLError as e:
if 'too large' in e.message:
raise AppEnginePlatformError(
"URLFetch request too large, URLFetch only "
"supports requests up to 10mb in size.", e)
raise ProtocolError(e)
except urlfetch.DownloadError as e:
if 'Too many redirects' in e.message:
raise MaxRetryError(self, url, reason=e)
raise ProtocolError(e)
except urlfetch.ResponseTooLargeError as e:
raise AppEnginePlatformError(
"URLFetch response too large, URLFetch only supports"
"responses up to 32mb in size.", e)
except urlfetch.SSLCertificateError as e:
raise SSLError(e)
except urlfetch.InvalidMethodError as e:
raise AppEnginePlatformError(
"URLFetch does not support method: %s" % method, e)
http_response = self._urlfetch_response_to_http_response(
response, **response_kw)
# Check for redirect response
if (http_response.get_redirect_location() and
retries.raise_on_redirect and redirect):
raise MaxRetryError(self, url, "too many redirects")
# Check if we should retry the HTTP response.
if retries.is_forced_retry(method, status_code=http_response.status):
retries = retries.increment(
method, url, response=http_response, _pool=self)
log.info("Forced retry: %s" % url)
retries.sleep()
return self.urlopen(
method, url,
body=body, headers=headers,
retries=retries, redirect=redirect,
timeout=timeout, **response_kw)
return http_response
def _urlfetch_response_to_http_response(self, urlfetch_resp, **response_kw):
if is_prod_appengine_v1():
# Production GAE handles deflate encoding automatically, but does
# not remove the encoding header.
content_encoding = urlfetch_resp.headers.get('content-encoding')
if content_encoding == 'deflate':
del urlfetch_resp.headers['content-encoding']
return HTTPResponse(
# In order for decoding to work, we must present the content as
# a file-like object.
body=BytesIO(urlfetch_resp.content),
headers=urlfetch_resp.headers,
status=urlfetch_resp.status_code,
**response_kw
)
def _get_absolute_timeout(self, timeout):
if timeout is Timeout.DEFAULT_TIMEOUT:
return 5 # 5s is the default timeout for URLFetch.
if isinstance(timeout, Timeout):
if timeout.read is not timeout.connect:
warnings.warn(
"URLFetch does not support granular timeout settings, "
"reverting to total timeout.", AppEnginePlatformWarning)
return timeout.total
return timeout
def _get_retries(self, retries, redirect):
if not isinstance(retries, Retry):
retries = Retry.from_int(
retries, redirect=redirect, default=self.retries)
if retries.connect or retries.read or retries.redirect:
warnings.warn(
"URLFetch only supports total retries and does not "
"recognize connect, read, or redirect retry parameters.",
AppEnginePlatformWarning)
return retries
def is_appengine():
return (is_local_appengine() or
is_prod_appengine_v1() or
is_prod_appengine_v2())
def is_appengine_sandbox():
return is_appengine() and not is_prod_appengine_v2()
def is_local_appengine():
return ('APPENGINE_RUNTIME' in os.environ and
'Development/' in os.environ['SERVER_SOFTWARE'])
def is_prod_appengine_v1():
return ('APPENGINE_RUNTIME' in os.environ and
'Google App Engine/' in os.environ['SERVER_SOFTWARE'] and
not is_prod_appengine_v2())
def is_prod_appengine_v2():
return os.environ.get('GAE_VM', False) == 'true'
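A usage sketch for the new contrib module, following the pattern its own docstring implies (the URL is illustrative; the URLFetch branch only runs where google.appengine.api is importable):

import urllib3
from urllib3.contrib.appengine import AppEngineManager, is_appengine_sandbox

if is_appengine_sandbox():
    http = AppEngineManager()         # URLFetch-backed, sandbox only
else:
    http = urllib3.PoolManager()      # real sockets everywhere else
r = http.request('GET', 'https://www.google.com/')
print(r.status)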

View File

@@ -38,8 +38,6 @@ Module Variables
----------------
:var DEFAULT_SSL_CIPHER_LIST: The list of supported SSL/TLS cipher suites.
Default: ``ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:
ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS``
.. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication
.. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)
@@ -85,23 +83,16 @@ _openssl_verify = {
+ OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,
}
# A secure default.
# Sources for more information on TLS ciphers:
#
# - https://wiki.mozilla.org/Security/Server_Side_TLS
# - https://www.ssllabs.com/projects/best-practices/index.html
# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
#
# The general intent is:
# - Prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE),
# - prefer ECDHE over DHE for better performance,
# - prefer any AES-GCM over any AES-CBC for better performance and security,
# - use 3DES as fallback which is secure but slow,
# - disable NULL authentication, MD5 MACs and DSS for security reasons.
DEFAULT_SSL_CIPHER_LIST = "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:" + \
"ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:" + \
"!aNULL:!MD5:!DSS"
DEFAULT_SSL_CIPHER_LIST = util.ssl_.DEFAULT_CIPHERS
# OpenSSL will only write 16K at a time
SSL_WRITE_BLOCKSIZE = 16384
try:
_ = memoryview
has_memoryview = True
except NameError:
has_memoryview = False
orig_util_HAS_SNI = util.HAS_SNI
orig_connection_ssl_wrap_socket = connection.ssl_wrap_socket
@@ -191,6 +182,11 @@ class WrappedSocket(object):
return b''
else:
raise
except OpenSSL.SSL.ZeroReturnError as e:
if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN:
return b''
else:
raise
except OpenSSL.SSL.WantReadError:
rd, wd, ed = select.select(
[self.socket], [], [], self.socket.gettimeout())
@@ -216,13 +212,21 @@ class WrappedSocket(object):
continue
def sendall(self, data):
while len(data):
sent = self._send_until_done(data)
data = data[sent:]
if has_memoryview and not isinstance(data, memoryview):
data = memoryview(data)
total_sent = 0
while total_sent < len(data):
sent = self._send_until_done(data[total_sent:total_sent+SSL_WRITE_BLOCKSIZE])
total_sent += sent
def shutdown(self):
# FIXME rethrow compatible exceptions should we ever use this
self.connection.shutdown()
def close(self):
if self._makefile_refs < 1:
return self.connection.shutdown()
return self.connection.close()
else:
self._makefile_refs -= 1
@@ -263,7 +267,7 @@ def _verify_callback(cnx, x509, err_no, err_depth, return_code):
def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
ca_certs=None, server_hostname=None,
ssl_version=None):
ssl_version=None, ca_cert_dir=None):
ctx = OpenSSL.SSL.Context(_openssl_versions[ssl_version])
if certfile:
keyfile = keyfile or certfile # Match behaviour of the normal python ssl library
@@ -272,9 +276,9 @@ def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
ctx.use_privatekey_file(keyfile)
if cert_reqs != ssl.CERT_NONE:
ctx.set_verify(_openssl_verify[cert_reqs], _verify_callback)
if ca_certs:
if ca_certs or ca_cert_dir:
try:
ctx.load_verify_locations(ca_certs, None)
ctx.load_verify_locations(ca_certs, ca_cert_dir)
except OpenSSL.SSL.Error as e:
raise ssl.SSLError('bad ca_certs: %r' % ca_certs, e)
else:
@@ -294,10 +298,12 @@ def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
try:
cnx.do_handshake()
except OpenSSL.SSL.WantReadError:
select.select([sock], [], [])
rd, _, _ = select.select([sock], [], [], sock.gettimeout())
if not rd:
raise timeout('select timed out')
continue
except OpenSSL.SSL.Error as e:
raise ssl.SSLError('bad handshake', e)
raise ssl.SSLError('bad handshake: %r' % e)
break
return WrappedSocket(cnx, sock)
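The sendall() change above caps each write at SSL_WRITE_BLOCKSIZE because OpenSSL accepts only about 16K per SSL_write call; a standalone sketch of the slicing logic (sock_send stands in for _send_until_done):

SSL_WRITE_BLOCKSIZE = 16384

def sendall(sock_send, data):
    if not isinstance(data, memoryview):
        data = memoryview(data)       # slicing a memoryview copies nothing
    total_sent = 0
    while total_sent < len(data):
        total_sent += sock_send(data[total_sent:total_sent + SSL_WRITE_BLOCKSIZE])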

View File

@@ -112,6 +112,9 @@ class ConnectTimeoutError(TimeoutError):
"Raised when a socket timeout occurs while connecting to a server"
pass
class NewConnectionError(ConnectTimeoutError, PoolError):
"Raised when we fail to establish a new connection. Usually ECONNREFUSED."
pass
class EmptyPoolError(PoolError):
"Raised when a pool runs out of connections and no more are allowed."
@@ -149,6 +152,11 @@ class SecurityWarning(HTTPWarning):
pass
class SubjectAltNameWarning(SecurityWarning):
"Warned when connecting to a host with a certificate missing a SAN."
pass
class InsecureRequestWarning(SecurityWarning):
"Warned when making an unverified HTTPS request."
pass
@@ -157,3 +165,29 @@ class InsecureRequestWarning(SecurityWarning):
class SystemTimeWarning(SecurityWarning):
"Warned when system time is suspected to be wrong"
pass
class InsecurePlatformWarning(SecurityWarning):
"Warned when certain SSL configuration is not available on a platform."
pass
class ResponseNotChunked(ProtocolError, ValueError):
"Response needs to be chunked in order to read it as chunks."
pass
class ProxySchemeUnknown(AssertionError, ValueError):
"ProxyManager does not support the supplied scheme"
# TODO(t-8ch): Stop inheriting from AssertionError in v2.0.
def __init__(self, scheme):
message = "Not supported proxy scheme %s" % scheme
super(ProxySchemeUnknown, self).__init__(message)
class HeaderParsingError(HTTPError):
"Raised by assert_header_parsing, but we convert it to a log.warning statement."
def __init__(self, defects, unparsed_data):
message = '%s, unparsed data: %r' % (defects or 'Unknown', unparsed_data)
super(HeaderParsingError, self).__init__(message)
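The new warning classes can be tuned with the stdlib warnings filters like any others; a hedged example:

import warnings
import urllib3
from urllib3.exceptions import SubjectAltNameWarning

warnings.simplefilter('ignore', SubjectAltNameWarning)  # silence SAN fallback
urllib3.disable_warnings()            # or drop every urllib3 HTTPWarning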

View File

@@ -8,7 +8,7 @@ except ImportError:
from ._collections import RecentlyUsedContainer
from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool
from .connectionpool import port_by_scheme
from .exceptions import LocationValueError
from .exceptions import LocationValueError, MaxRetryError, ProxySchemeUnknown
from .request import RequestMethods
from .util.url import parse_url
from .util.retry import Retry
@@ -64,6 +64,14 @@ class PoolManager(RequestMethods):
self.pools = RecentlyUsedContainer(num_pools,
dispose_func=lambda p: p.close())
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.clear()
# Return False to re-raise any potential exceptions
return False
def _new_pool(self, scheme, host, port):
"""
Create a new :class:`ConnectionPool` based on host, port and scheme.
@@ -167,7 +175,14 @@ class PoolManager(RequestMethods):
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect)
kw['retries'] = retries.increment(method, redirect_location)
try:
retries = retries.increment(method, url, response=response, _pool=conn)
except MaxRetryError:
if retries.raise_on_redirect:
raise
return response
kw['retries'] = retries
kw['redirect'] = redirect
log.info("Redirecting %s -> %s" % (url, redirect_location))
@@ -212,8 +227,8 @@ class ProxyManager(PoolManager):
port = port_by_scheme.get(proxy.scheme, 80)
proxy = proxy._replace(port=port)
assert proxy.scheme in ("http", "https"), \
'Not supported proxy scheme %s' % proxy.scheme
if proxy.scheme not in ("http", "https"):
raise ProxySchemeUnknown(proxy.scheme)
self.proxy = proxy
self.proxy_headers = proxy_headers or {}
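With the change above, redirects followed by PoolManager now draw from the Retry budget and can raise MaxRetryError; a sketch (the URL is illustrative):

import urllib3
from urllib3.util.retry import Retry

http = urllib3.PoolManager()
r = http.request('GET', 'http://httpbin.org/redirect/5',
                 retries=Retry(redirect=2, raise_on_redirect=False))
print(r.status)                       # a 3xx once the redirect budget is spent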

View File

@@ -71,14 +71,22 @@ class RequestMethods(object):
headers=headers,
**urlopen_kw)
def request_encode_url(self, method, url, fields=None, **urlopen_kw):
def request_encode_url(self, method, url, fields=None, headers=None,
**urlopen_kw):
"""
Make a request using :meth:`urlopen` with the ``fields`` encoded in
the url. This is useful for request methods like GET, HEAD, DELETE, etc.
"""
if headers is None:
headers = self.headers
extra_kw = {'headers': headers}
extra_kw.update(urlopen_kw)
if fields:
url += '?' + urlencode(fields)
return self.urlopen(method, url, **urlopen_kw)
return self.urlopen(method, url, **extra_kw)
def request_encode_body(self, method, url, fields=None, headers=None,
encode_multipart=True, multipart_boundary=None,

View File

@@ -1,13 +1,16 @@
from contextlib import contextmanager
import zlib
import io
from socket import timeout as SocketTimeout
from ._collections import HTTPHeaderDict
from .exceptions import ProtocolError, DecodeError, ReadTimeoutError
from .packages.six import string_types as basestring, binary_type
from .exceptions import (
ProtocolError, DecodeError, ReadTimeoutError, ResponseNotChunked
)
from .packages.six import string_types as basestring, binary_type, PY3
from .packages.six.moves import http_client as httplib
from .connection import HTTPException, BaseSSLError
from .util.response import is_fp_closed
from .util.response import is_fp_closed, is_response_to_head
class DeflateDecoder(object):
@@ -21,6 +24,9 @@ class DeflateDecoder(object):
return getattr(self._obj, name)
def decompress(self, data):
if not data:
return data
if not self._first_try:
return self._obj.decompress(data)
@@ -36,9 +42,23 @@ class DeflateDecoder(object):
self._data = None
class GzipDecoder(object):
def __init__(self):
self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
def __getattr__(self, name):
return getattr(self._obj, name)
def decompress(self, data):
if not data:
return data
return self._obj.decompress(data)
def _get_decoder(mode):
if mode == 'gzip':
return zlib.decompressobj(16 + zlib.MAX_WBITS)
return GzipDecoder()
return DeflateDecoder()
@@ -76,9 +96,10 @@ class HTTPResponse(io.IOBase):
strict=0, preload_content=True, decode_content=True,
original_response=None, pool=None, connection=None):
self.headers = HTTPHeaderDict()
if headers:
self.headers.update(headers)
if isinstance(headers, HTTPHeaderDict):
self.headers = headers
else:
self.headers = HTTPHeaderDict(headers)
self.status = status
self.version = version
self.reason = reason
@@ -100,7 +121,17 @@ class HTTPResponse(io.IOBase):
if hasattr(body, 'read'):
self._fp = body
if preload_content and not self._body:
# Are we using the chunked-style of transfer encoding?
self.chunked = False
self.chunk_left = None
tr_enc = self.headers.get('transfer-encoding', '').lower()
# Don't incur the penalty of creating a list and then discarding it
encodings = (enc.strip() for enc in tr_enc.split(","))
if "chunked" in encodings:
self.chunked = True
# We certainly don't want to preload content when the response is chunked.
if not self.chunked and preload_content and not self._body:
self._body = self.read(decode_content=decode_content)
def get_redirect_location(self):
@@ -140,6 +171,76 @@ class HTTPResponse(io.IOBase):
"""
return self._fp_bytes_read
def _init_decoder(self):
"""
Set up the _decoder attribute if necessary.
"""
# Note: content-encoding value should be case-insensitive, per RFC 7230
# Section 3.2
content_encoding = self.headers.get('content-encoding', '').lower()
if self._decoder is None and content_encoding in self.CONTENT_DECODERS:
self._decoder = _get_decoder(content_encoding)
def _decode(self, data, decode_content, flush_decoder):
"""
Decode the data passed in and potentially flush the decoder.
"""
try:
if decode_content and self._decoder:
data = self._decoder.decompress(data)
except (IOError, zlib.error) as e:
content_encoding = self.headers.get('content-encoding', '').lower()
raise DecodeError(
"Received response with content-encoding: %s, but "
"failed to decode it." % content_encoding, e)
if flush_decoder and decode_content and self._decoder:
buf = self._decoder.decompress(binary_type())
data += buf + self._decoder.flush()
return data
@contextmanager
def _error_catcher(self):
"""
Catch low-level python exceptions, instead re-raising urllib3
variants, so that low-level exceptions are not leaked in the
high-level api.
On exit, release the connection back to the pool.
"""
try:
try:
yield
except SocketTimeout:
# FIXME: Ideally we'd like to include the url in the ReadTimeoutError but
# there is yet no clean way to get at it from this context.
raise ReadTimeoutError(self._pool, None, 'Read timed out.')
except BaseSSLError as e:
# FIXME: Is there a better way to differentiate between SSLErrors?
if 'read operation timed out' not in str(e): # Defensive:
# This shouldn't happen but just in case we're missing an edge
# case, let's avoid swallowing SSL errors.
raise
raise ReadTimeoutError(self._pool, None, 'Read timed out.')
except HTTPException as e:
# This includes IncompleteRead.
raise ProtocolError('Connection broken: %r' % e, e)
except Exception:
# The response may not be closed but we're not going to use it anymore
# so close it now to ensure that the connection is released back to the pool.
if self._original_response and not self._original_response.isclosed():
self._original_response.close()
raise
finally:
if self._original_response and self._original_response.isclosed():
self.release_conn()
def read(self, amt=None, decode_content=None, cache_content=False):
"""
Similar to :meth:`httplib.HTTPResponse.read`, but with two additional
@@ -161,12 +262,7 @@ class HTTPResponse(io.IOBase):
after having ``.read()`` the file object. (Overridden if ``amt`` is
set.)
"""
# Note: content-encoding value should be case-insensitive, per RFC 7230
# Section 3.2
content_encoding = self.headers.get('content-encoding', '').lower()
if self._decoder is None:
if content_encoding in self.CONTENT_DECODERS:
self._decoder = _get_decoder(content_encoding)
self._init_decoder()
if decode_content is None:
decode_content = self.decode_content
@@ -174,67 +270,37 @@ class HTTPResponse(io.IOBase):
return
flush_decoder = False
data = None
try:
try:
if amt is None:
# cStringIO doesn't like amt=None
data = self._fp.read()
with self._error_catcher():
if amt is None:
# cStringIO doesn't like amt=None
data = self._fp.read()
flush_decoder = True
else:
cache_content = False
data = self._fp.read(amt)
if amt != 0 and not data: # Platform-specific: Buggy versions of Python.
# Close the connection when no data is returned
#
# This is redundant to what httplib/http.client _should_
# already do. However, versions of python released before
# December 15, 2012 (http://bugs.python.org/issue16298) do
# not properly close the connection in all cases. There is
# no harm in redundantly calling close.
self._fp.close()
flush_decoder = True
else:
cache_content = False
data = self._fp.read(amt)
if amt != 0 and not data: # Platform-specific: Buggy versions of Python.
# Close the connection when no data is returned
#
# This is redundant to what httplib/http.client _should_
# already do. However, versions of python released before
# December 15, 2012 (http://bugs.python.org/issue16298) do
# not properly close the connection in all cases. There is
# no harm in redundantly calling close.
self._fp.close()
flush_decoder = True
except SocketTimeout:
# FIXME: Ideally we'd like to include the url in the ReadTimeoutError but
# there is yet no clean way to get at it from this context.
raise ReadTimeoutError(self._pool, None, 'Read timed out.')
except BaseSSLError as e:
# FIXME: Is there a better way to differentiate between SSLErrors?
if not 'read operation timed out' in str(e): # Defensive:
# This shouldn't happen but just in case we're missing an edge
# case, let's avoid swallowing SSL errors.
raise
raise ReadTimeoutError(self._pool, None, 'Read timed out.')
except HTTPException as e:
# This includes IncompleteRead.
raise ProtocolError('Connection broken: %r' % e, e)
if data:
self._fp_bytes_read += len(data)
try:
if decode_content and self._decoder:
data = self._decoder.decompress(data)
except (IOError, zlib.error) as e:
raise DecodeError(
"Received response with content-encoding: %s, but "
"failed to decode it." % content_encoding, e)
if flush_decoder and decode_content and self._decoder:
buf = self._decoder.decompress(binary_type())
data += buf + self._decoder.flush()
data = self._decode(data, decode_content, flush_decoder)
if cache_content:
self._body = data
return data
return data
finally:
if self._original_response and self._original_response.isclosed():
self.release_conn()
def stream(self, amt=2**16, decode_content=None):
"""
@@ -252,11 +318,15 @@ class HTTPResponse(io.IOBase):
If True, will attempt to decode the body based on the
'content-encoding' header.
"""
while not is_fp_closed(self._fp):
data = self.read(amt=amt, decode_content=decode_content)
if self.chunked:
for line in self.read_chunked(amt, decode_content=decode_content):
yield line
else:
while not is_fp_closed(self._fp):
data = self.read(amt=amt, decode_content=decode_content)
if data:
yield data
if data:
yield data
@classmethod
def from_httplib(ResponseCls, r, **response_kw):
@@ -267,14 +337,17 @@ class HTTPResponse(io.IOBase):
Remaining parameters are passed to the HTTPResponse constructor, along
with ``original_response=r``.
"""
headers = r.msg
headers = HTTPHeaderDict()
for k, v in r.getheaders():
headers.add(k, v)
if not isinstance(headers, HTTPHeaderDict):
if PY3: # Python 3
headers = HTTPHeaderDict(headers.items())
else: # Python 2
headers = HTTPHeaderDict.from_httplib(headers)
# HTTPResponse objects in Python 3 don't have a .strict attribute
strict = getattr(r, 'strict', 0)
return ResponseCls(body=r,
resp = ResponseCls(body=r,
headers=headers,
status=r.status,
version=r.version,
@@ -282,6 +355,7 @@ class HTTPResponse(io.IOBase):
strict=strict,
original_response=r,
**response_kw)
return resp
# Backwards-compatibility methods for httplib.HTTPResponse
def getheaders(self):
@@ -331,3 +405,81 @@ class HTTPResponse(io.IOBase):
else:
b[:len(temp)] = temp
return len(temp)
def _update_chunk_length(self):
# First, we'll figure out length of a chunk and then
# we'll try to read it from socket.
if self.chunk_left is not None:
return
line = self._fp.fp.readline()
line = line.split(b';', 1)[0]
try:
self.chunk_left = int(line, 16)
except ValueError:
# Invalid chunked protocol response, abort.
self.close()
raise httplib.IncompleteRead(line)
def _handle_chunk(self, amt):
returned_chunk = None
if amt is None:
chunk = self._fp._safe_read(self.chunk_left)
returned_chunk = chunk
self._fp._safe_read(2) # Toss the CRLF at the end of the chunk.
self.chunk_left = None
elif amt < self.chunk_left:
value = self._fp._safe_read(amt)
self.chunk_left = self.chunk_left - amt
returned_chunk = value
elif amt == self.chunk_left:
value = self._fp._safe_read(amt)
self._fp._safe_read(2) # Toss the CRLF at the end of the chunk.
self.chunk_left = None
returned_chunk = value
else: # amt > self.chunk_left
returned_chunk = self._fp._safe_read(self.chunk_left)
self._fp._safe_read(2) # Toss the CRLF at the end of the chunk.
self.chunk_left = None
return returned_chunk
def read_chunked(self, amt=None, decode_content=None):
"""
Similar to :meth:`HTTPResponse.read`, but with an additional
parameter: ``decode_content``.
:param decode_content:
If True, will attempt to decode the body based on the
'content-encoding' header.
"""
self._init_decoder()
# FIXME: Rewrite this method and make it a class with a better structured logic.
if not self.chunked:
raise ResponseNotChunked("Response is not chunked. "
"Header 'transfer-encoding: chunked' is missing.")
# Don't bother reading the body of a HEAD request.
if self._original_response and is_response_to_head(self._original_response):
self._original_response.close()
return
with self._error_catcher():
while True:
self._update_chunk_length()
if self.chunk_left == 0:
break
chunk = self._handle_chunk(amt)
yield self._decode(chunk, decode_content=decode_content,
flush_decoder=True)
# Chunk content ends with \r\n: discard it.
while True:
line = self._fp.fp.readline()
if not line:
# Some sites may not end with '\r\n'.
break
if line == b'\r\n':
break
# We read everything; close the "file".
if self._original_response:
self._original_response.close()
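A sketch of consuming a chunked body through the new code path (stream() now delegates to read_chunked() when the response is chunked; the endpoint is illustrative):

import urllib3

http = urllib3.PoolManager()
r = http.request('GET', 'http://httpbin.org/stream/3', preload_content=False)
for chunk in r.stream(1024):          # yields decoded chunks as they arrive
    print(len(chunk))
r.release_conn()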

View File

@@ -60,6 +60,8 @@ def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
@@ -78,15 +80,16 @@ def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
sock.connect(sa)
return sock
except socket.error as _:
err = _
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
raise err
else:
raise socket.error("getaddrinfo returns an empty list")
raise socket.error("getaddrinfo returns an empty list")
def _set_socket_options(sock, options):
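The bracket stripping above lets an IPv6 literal taken straight from a URL be passed to create_connection; a hedged sketch (address and port are illustrative and need a live listener):

from urllib3.util.connection import create_connection

sock = create_connection(('[::1]', 8080), timeout=2.0)  # brackets stripped
sock.close()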

View File

@@ -1,3 +1,8 @@
from ..packages.six.moves import http_client as httplib
from ..exceptions import HeaderParsingError
def is_fp_closed(obj):
"""
Checks whether a given file-like object is closed.
@@ -20,3 +25,49 @@ def is_fp_closed(obj):
pass
raise ValueError("Unable to determine whether fp is closed.")
def assert_header_parsing(headers):
"""
Asserts whether all headers have been successfully parsed.
Extracts encountered errors from the result of parsing headers.
Only works on Python 3.
:param headers: Headers to verify.
:type headers: `httplib.HTTPMessage`.
:raises urllib3.exceptions.HeaderParsingError:
If parsing errors are found.
"""
# This will fail silently if we pass in the wrong kind of parameter.
# To make debugging easier add an explicit check.
if not isinstance(headers, httplib.HTTPMessage):
raise TypeError('expected httplib.Message, got {}.'.format(
type(headers)))
defects = getattr(headers, 'defects', None)
get_payload = getattr(headers, 'get_payload', None)
unparsed_data = None
if get_payload: # Platform-specific: Python 3.
unparsed_data = get_payload()
if defects or unparsed_data:
raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)
def is_response_to_head(response):
"""
Checks whether the request that produced this response was a HEAD request.
Handles the quirks of AppEngine.
:param response:
:type response: :class:`httplib.HTTPResponse`
"""
# FIXME: Can we do this somehow without accessing private httplib _method?
method = response._method
if isinstance(method, int): # Platform-specific: Appengine
return method == 3
return method.upper() == 'HEAD'

View File

@@ -94,7 +94,7 @@ class Retry(object):
seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep
for [0.1s, 0.2s, 0.4s, ...] between retries. It will never be longer
than :attr:`Retry.MAX_BACKOFF`.
than :attr:`Retry.BACKOFF_MAX`.
By default, backoff is disabled (set to 0).
@@ -190,7 +190,7 @@ class Retry(object):
return isinstance(err, (ReadTimeoutError, ProtocolError))
def is_forced_retry(self, method, status_code):
""" Is this method/response retryable? (Based on method/codes whitelists)
""" Is this method/status code retryable? (Based on method/codes whitelists)
"""
if self.method_whitelist and method.upper() not in self.method_whitelist:
return False

View File

@@ -1,17 +1,25 @@
from binascii import hexlify, unhexlify
from hashlib import md5, sha1
from hashlib import md5, sha1, sha256
from ..exceptions import SSLError
from ..exceptions import SSLError, InsecurePlatformWarning
SSLContext = None
HAS_SNI = False
create_default_context = None
# Maps the length of a digest to a possible hash function producing this digest
HASHFUNC_MAP = {
32: md5,
40: sha1,
64: sha256,
}
import errno
import ssl
import warnings
try: # Test for SSL features
import ssl
from ssl import wrap_socket, CERT_NONE, PROTOCOL_SSLv23
from ssl import HAS_SNI # Has SNI?
except ImportError:
@@ -24,14 +32,24 @@ except ImportError:
OP_NO_SSLv2, OP_NO_SSLv3 = 0x1000000, 0x2000000
OP_NO_COMPRESSION = 0x20000
try:
from ssl import _DEFAULT_CIPHERS
except ImportError:
_DEFAULT_CIPHERS = (
'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'
'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:ECDH+RC4:'
'DH+RC4:RSA+RC4:!aNULL:!eNULL:!MD5'
)
# A secure default.
# Sources for more information on TLS ciphers:
#
# - https://wiki.mozilla.org/Security/Server_Side_TLS
# - https://www.ssllabs.com/projects/best-practices/index.html
# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
#
# The general intent is:
# - Prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE),
# - prefer ECDHE over DHE for better performance,
# - prefer any AES-GCM over any AES-CBC for better performance and security,
# - use 3DES as fallback which is secure but slow,
# - disable NULL authentication, MD5 MACs and DSS for security reasons.
DEFAULT_CIPHERS = (
'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'
'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:'
'!eNULL:!MD5'
)
try:
from ssl import SSLContext # Modern SSL?
@@ -39,7 +57,8 @@ except ImportError:
import sys
class SSLContext(object): # Platform-specific: Python 2 & 3.1
supports_set_ciphers = sys.version_info >= (2, 7)
supports_set_ciphers = ((2, 7) <= sys.version_info < (3,) or
(3, 2) <= sys.version_info)
def __init__(self, protocol_version):
self.protocol = protocol_version
@@ -56,8 +75,11 @@ except ImportError:
self.certfile = certfile
self.keyfile = keyfile
def load_verify_locations(self, location):
self.ca_certs = location
def load_verify_locations(self, cafile=None, capath=None):
self.ca_certs = cafile
if capath is not None:
raise SSLError("CA directories not supported in older Pythons")
def set_ciphers(self, cipher_suite):
if not self.supports_set_ciphers:
@@ -69,6 +91,14 @@ except ImportError:
self.ciphers = cipher_suite
def wrap_socket(self, socket, server_hostname=None):
warnings.warn(
'A true SSLContext object is not available. This prevents '
'urllib3 from configuring SSL appropriately and may cause '
'certain SSL connections to fail. For more information, see '
'https://urllib3.readthedocs.org/en/latest/security.html'
'#insecureplatformwarning.',
InsecurePlatformWarning
)
kwargs = {
'keyfile': self.keyfile,
'certfile': self.certfile,
@@ -92,30 +122,21 @@ def assert_fingerprint(cert, fingerprint):
Fingerprint as string of hexdigits, can be interspersed by colons.
"""
# Maps the length of a digest to a possible hash function producing
# this digest.
hashfunc_map = {
16: md5,
20: sha1
}
fingerprint = fingerprint.replace(':', '').lower()
digest_length, odd = divmod(len(fingerprint), 2)
if odd or digest_length not in hashfunc_map:
raise SSLError('Fingerprint is of invalid length.')
digest_length = len(fingerprint)
hashfunc = HASHFUNC_MAP.get(digest_length)
if not hashfunc:
raise SSLError(
'Fingerprint of invalid length: {0}'.format(fingerprint))
# We need encode() here for py32; works on py2 and p33.
fingerprint_bytes = unhexlify(fingerprint.encode())
hashfunc = hashfunc_map[digest_length]
cert_digest = hashfunc(cert).digest()
if not cert_digest == fingerprint_bytes:
if cert_digest != fingerprint_bytes:
raise SSLError('Fingerprints did not match. Expected "{0}", got "{1}".'
.format(hexlify(fingerprint_bytes),
hexlify(cert_digest)))
.format(fingerprint, hexlify(cert_digest)))
def resolve_cert_reqs(candidate):
@@ -157,7 +178,7 @@ def resolve_ssl_version(candidate):
return candidate
def create_urllib3_context(ssl_version=None, cert_reqs=ssl.CERT_REQUIRED,
def create_urllib3_context(ssl_version=None, cert_reqs=None,
options=None, ciphers=None):
"""All arguments have the same meaning as ``ssl_wrap_socket``.
@@ -194,6 +215,9 @@ def create_urllib3_context(ssl_version=None, cert_reqs=ssl.CERT_REQUIRED,
"""
context = SSLContext(ssl_version or ssl.PROTOCOL_SSLv23)
# Setting the default here, as we may have no ssl module on import
cert_reqs = ssl.CERT_REQUIRED if cert_reqs is None else cert_reqs
if options is None:
options = 0
# SSLv2 is easily broken and is considered harmful and dangerous
@@ -207,20 +231,23 @@ def create_urllib3_context(ssl_version=None, cert_reqs=ssl.CERT_REQUIRED,
context.options |= options
if getattr(context, 'supports_set_ciphers', True): # Platform-specific: Python 2.6
context.set_ciphers(ciphers or _DEFAULT_CIPHERS)
context.set_ciphers(ciphers or DEFAULT_CIPHERS)
context.verify_mode = cert_reqs
if getattr(context, 'check_hostname', None) is not None: # Platform-specific: Python 3.2
context.check_hostname = (context.verify_mode == ssl.CERT_REQUIRED)
# We do our own verification, including fingerprints and alternative
# hostnames. So disable it here
context.check_hostname = False
return context
def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
ca_certs=None, server_hostname=None,
ssl_version=None, ciphers=None, ssl_context=None):
ssl_version=None, ciphers=None, ssl_context=None,
ca_cert_dir=None):
"""
All arguments except for server_hostname and ssl_context have the same
meaning as they do when using :func:`ssl.wrap_socket`.
All arguments except for server_hostname, ssl_context, and ca_cert_dir have
the same meaning as they do when using :func:`ssl.wrap_socket`.
:param server_hostname:
When SNI is supported, the expected hostname of the certificate
@@ -230,15 +257,19 @@ def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
:param ciphers:
A string of ciphers we wish the client to support. This is not
supported on Python 2.6 as the ssl module does not support it.
:param ca_cert_dir:
A directory containing CA certificates in multiple separate files, as
supported by OpenSSL's -CApath flag or the capath argument to
SSLContext.load_verify_locations().
"""
context = ssl_context
if context is None:
context = create_urllib3_context(ssl_version, cert_reqs,
ciphers=ciphers)
if ca_certs:
if ca_certs or ca_cert_dir:
try:
context.load_verify_locations(ca_certs)
context.load_verify_locations(ca_certs, ca_cert_dir)
except IOError as e: # Platform-specific: Python 2.6, 2.7, 3.2
raise SSLError(e)
# Py33 raises FileNotFoundError which subclasses OSError
@@ -247,6 +278,7 @@ def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
if e.errno == errno.ENOENT:
raise SSLError(e)
raise
if certfile:
context.load_cert_chain(certfile, keyfile)
if HAS_SNI: # Platform-specific: OpenSSL with enabled SNI
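The digest length now selects the hash function, so SHA-256 fingerprints (64 hex digits) work; a self-contained sketch with a stand-in certificate blob:

from hashlib import sha256
from urllib3.util.ssl_ import assert_fingerprint

der_cert = b'stand-in DER certificate bytes'
fp = sha256(der_cert).hexdigest()     # 64 hex digits -> sha256 is chosen
assert_fingerprint(der_cert, fp)      # returns quietly; SSLError on mismatch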

View File

@@ -15,6 +15,8 @@ class Url(namedtuple('Url', url_attrs)):
def __new__(cls, scheme=None, auth=None, host=None, port=None, path=None,
query=None, fragment=None):
if path and not path.startswith('/'):
path = '/' + path
return super(Url, cls).__new__(cls, scheme, auth, host, port, path,
query, fragment)
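The new leading-slash normalization in a one-line sketch:

from urllib3.util.url import Url

print(Url(scheme='http', host='example.com', path='index.html').url)
# 'http://example.com/index.html' -- the slash is inserted automatically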

View File

@@ -62,12 +62,11 @@ def merge_setting(request_setting, session_setting, dict_class=OrderedDict):
merged_setting = dict_class(to_key_val_list(session_setting))
merged_setting.update(to_key_val_list(request_setting))
# Remove keys that are set to None.
for (k, v) in request_setting.items():
if v is None:
del merged_setting[k]
merged_setting = dict((k, v) for (k, v) in merged_setting.items() if v is not None)
# Remove keys that are set to None. Extract keys first to avoid altering
# the dictionary during iteration.
none_keys = [k for (k, v) in merged_setting.items() if v is None]
for key in none_keys:
del merged_setting[key]
return merged_setting
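Behaviour sketch for the rewritten merge_setting (header names are illustrative): request-level None values still delete session-level keys, now without mutating the dict mid-iteration:

from requests.sessions import merge_setting

session_headers = {'Accept': '*/*', 'X-Drop-Me': '1'}
print(merge_setting({'X-Drop-Me': None}, session_headers))
# OrderedDict([('Accept', '*/*')])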
@@ -90,7 +89,7 @@ def merge_hooks(request_hooks, session_hooks, dict_class=OrderedDict):
class SessionRedirectMixin(object):
def resolve_redirects(self, resp, req, stream=False, timeout=None,
verify=True, cert=None, proxies=None):
verify=True, cert=None, proxies=None, **adapter_kwargs):
"""Receives a Response. Returns a generator of Responses."""
i = 0
@@ -171,7 +170,10 @@ class SessionRedirectMixin(object):
except KeyError:
pass
extract_cookies_to_jar(prepared_request._cookies, prepared_request, resp.raw)
# Extract any cookies sent on the response to the cookiejar
# in the new request. Because we've mutated our copied prepared
# request, use the old one that we haven't yet touched.
extract_cookies_to_jar(prepared_request._cookies, req, resp.raw)
prepared_request._cookies.update(self.cookies)
prepared_request.prepare_cookies(prepared_request._cookies)
@@ -190,6 +192,7 @@ class SessionRedirectMixin(object):
cert=cert,
proxies=proxies,
allow_redirects=False,
**adapter_kwargs
)
extract_cookies_to_jar(self.cookies, prepared_request, resp.raw)
@@ -271,6 +274,12 @@ class Session(SessionRedirectMixin):
>>> s = requests.Session()
>>> s.get('http://httpbin.org/get')
200
Or as a context manager::
>>> with requests.Session() as s:
>>> s.get('http://httpbin.org/get')
200
"""
__attrs__ = [
@@ -290,9 +299,9 @@ class Session(SessionRedirectMixin):
#: :class:`Request <Request>`.
self.auth = None
#: Dictionary mapping protocol to the URL of the proxy (e.g.
#: {'http': 'foo.bar:3128'}) to be used on each
#: :class:`Request <Request>`.
#: Dictionary mapping protocol or protocol and host to the URL of the proxy
#: (e.g. {'http': 'foo.bar:3128', 'http://host.name': 'foo.bar:4012'}) to
#: be used on each :class:`Request <Request>`.
self.proxies = {}
#: Event-handling hooks.
@@ -401,8 +410,8 @@ class Session(SessionRedirectMixin):
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary or bytes to be sent in the query
string for the :class:`Request`.
:param data: (optional) Dictionary or bytes to send in the body of the
:class:`Request`.
:param data: (optional) Dictionary, bytes, or file-like object to send
in the body of the :class:`Request`.
:param json: (optional) json to send in the body of the
:class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the
@@ -414,13 +423,13 @@ class Session(SessionRedirectMixin):
:param auth: (optional) Auth tuple or callable to enable
Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a (`connect timeout, read
timeout <user/advanced.html#timeouts>`_) tuple.
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Set to True by default.
:type allow_redirects: bool
:param proxies: (optional) Dictionary mapping protocol to the URL of
the proxy.
:param proxies: (optional) Dictionary mapping protocol or protocol and
hostname to the URL of the proxy.
:param stream: (optional) whether to immediately download the response
content. Defaults to ``False``.
:param verify: (optional) if ``True``, the SSL cert will be verified.
@@ -557,10 +566,6 @@ class Session(SessionRedirectMixin):
# Set up variables needed for resolve_redirects and dispatching of hooks
allow_redirects = kwargs.pop('allow_redirects', True)
stream = kwargs.get('stream')
timeout = kwargs.get('timeout')
verify = kwargs.get('verify')
cert = kwargs.get('cert')
proxies = kwargs.get('proxies')
hooks = request.hooks
# Get the appropriate adapter to use
@@ -588,12 +593,7 @@ class Session(SessionRedirectMixin):
extract_cookies_to_jar(self.cookies, request, r.raw)
# Redirect resolving generator.
gen = self.resolve_redirects(r, request,
stream=stream,
timeout=timeout,
verify=verify,
cert=cert,
proxies=proxies)
gen = self.resolve_redirects(r, request, **kwargs)
# Resolve redirects if allowed.
history = [resp for resp in gen] if allow_redirects else []

View File

@@ -25,7 +25,8 @@ from . import __version__
from . import certs
from .compat import parse_http_list as _parse_list_header
from .compat import (quote, urlparse, bytes, str, OrderedDict, unquote, is_py2,
builtin_str, getproxies, proxy_bypass, urlunparse)
builtin_str, getproxies, proxy_bypass, urlunparse,
basestring)
from .cookies import RequestsCookieJar, cookiejar_from_dict
from .structures import CaseInsensitiveDict
from .exceptions import InvalidURL
@@ -66,7 +67,7 @@ def super_len(o):
return len(o.getvalue())
def get_netrc_auth(url):
def get_netrc_auth(url, raise_errors=False):
"""Returns the Requests tuple auth for a given url from netrc."""
try:
@@ -104,8 +105,9 @@ def get_netrc_auth(url):
return (_netrc[login_i], _netrc[2])
except (NetrcParseError, IOError):
# If there was a parsing error or a permissions issue reading the file,
# we'll just skip netrc auth
pass
# we'll just skip netrc auth unless explicitly asked to raise errors.
if raise_errors:
raise
# AppEngine hackiness.
except (ImportError, AttributeError):
@@ -115,7 +117,8 @@ def get_netrc_auth(url):
def guess_filename(obj):
"""Tries to guess the filename of the given object."""
name = getattr(obj, 'name', None)
if name and isinstance(name, builtin_str) and name[0] != '<' and name[-1] != '>':
if (name and isinstance(name, basestring) and name[0] != '<' and
name[-1] != '>'):
return os.path.basename(name)
@@ -418,10 +421,18 @@ def requote_uri(uri):
This function passes the given URI through an unquote/quote cycle to
ensure that it is fully and consistently quoted.
"""
# Unquote only the unreserved characters
# Then quote only illegal characters (do not quote reserved, unreserved,
# or '%')
return quote(unquote_unreserved(uri), safe="!#$%&'()*+,/:;=?@[]~")
safe_with_percent = "!#$%&'()*+,/:;=?@[]~"
safe_without_percent = "!#$&'()*+,/:;=?@[]~"
try:
# Unquote only the unreserved characters
# Then quote only illegal characters (do not quote reserved,
# unreserved, or '%')
return quote(unquote_unreserved(uri), safe=safe_with_percent)
except InvalidURL:
# We couldn't unquote the given URI, so let's try quoting it, but
# there may be unquoted '%'s in the URI. We need to make sure they're
# properly quoted so they do not cause issues elsewhere.
return quote(uri, safe=safe_without_percent)
def address_in_network(ip, net):
@@ -526,6 +537,18 @@ def get_environ_proxies(url):
else:
return getproxies()
def select_proxy(url, proxies):
"""Select a proxy for the url, if applicable.
:param url: The url of the request
:param proxies: A dictionary of schemes or schemes and hosts to proxy URLs
"""
proxies = proxies or {}
urlparts = urlparse(url)
proxy = proxies.get(urlparts.scheme+'://'+urlparts.hostname)
if proxy is None:
proxy = proxies.get(urlparts.scheme)
return proxy
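A hedged sketch of the new select_proxy helper (proxy URLs are illustrative); a scheme+host entry wins over a bare scheme entry:

from requests.utils import select_proxy

proxies = {'http': 'http://proxy.local:3128',
           'http://special.host': 'http://proxy.local:4012'}
print(select_proxy('http://special.host/path', proxies))  # per-host proxy
print(select_proxy('http://other.host/path', proxies))    # scheme fallback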
def default_user_agent(name="python-requests"):
"""Return a string representing the default user agent."""