Python and BeautifulSoup encoding issues

I'm writing a crawler with Python using BeautifulSoup, and everything was going swimmingly till I ran into this site:



http://www.elnorte.ec/



I'm getting the contents with the requests library:


import requests

r = requests.get('http://www.elnorte.ec/')
content = r.content



If I print the content variable at that point, all the Spanish special characters seem to be working fine. However, once I try to feed the content variable to BeautifulSoup, it all gets messed up:


soup = BeautifulSoup(content)
print(soup)
...
<a class="blogCalendarToday" href="/component/blog_calendar/?year=2011&amp;month=08&amp;day=27&amp;modid=203" title="1009 artÃ­culos en este dÃ­a">
...



It's apparently garbling all the Spanish special characters (accents and so on). I've tried content.decode('utf-8') and content.decode('latin-1'), and I've also tried messing around with the fromEncoding parameter to BeautifulSoup, setting it to fromEncoding='utf-8' and fromEncoding='latin-1', but still no dice.
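Concretely, the variants I tried look something like this (none of them fixed the accents):

soup = BeautifulSoup(content.decode('utf-8'))
soup = BeautifulSoup(content.decode('latin-1'))
soup = BeautifulSoup(content, fromEncoding='utf-8')
soup = BeautifulSoup(content, fromEncoding='latin-1')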



Any pointers would be much appreciated.




5 Answers



Could you try:


import urllib
import BeautifulSoup

r = urllib.urlopen('http://www.elnorte.ec/')
x = BeautifulSoup.BeautifulSoup(r.read())
r.close()

print x.prettify('latin-1')



I get the correct output.
Oh, and in this special case you could also use x.__str__(encoding='latin1').





I guess this is because the content is actually in ISO-8859-1 (or ISO-8859-15), while the meta http-equiv content-type incorrectly says "UTF-8".



Could you confirm?
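One quick way to confirm such a mismatch (a sketch using the newer requests and bs4 packages; UnicodeDammit is bs4's bundled encoding detector):

import requests
from bs4 import UnicodeDammit

r = requests.get('http://www.elnorte.ec/')
# What the HTTP header claims the encoding is, if anything
print(r.headers.get('content-type'))
# What the detector actually settles on for the raw bytes
print(UnicodeDammit(r.content).original_encoding)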





Hi Gaikokujin, thanks for your answer. You're quite right, if I prettify it with the 'latin-1' parameter, I get the string back with all the right accents and all. However, I need to go through the soup to process the links, and if I try to make a soup out of the string again, it messes up the accents again.
– David
Aug 28 '11 at 20:10





Actually, never mind, now I'm getting an error when trying your suggestion: UnicodeEncodeError: 'latin-1' codec can't encode characters in position 62-63: ordinal not in range(256)
– David
Aug 28 '11 at 20:36






It seems to work again if I do: x = BeautifulSoup.BeautifulSoup(r.read(), fromEncoding='latin-1'), but again, if I try to make a new soup out of the prettified string, it messes it up again :/
– David
Aug 28 '11 at 20:39





Finally got it. I just had to do: soup = BeautifulSoup(content, fromEncoding='latin-1'), and then, when it came time to parse the links: i_title = item.contents[0].encode('latin-1').decode('utf-8'). That seemed to do the trick. Thanks for your help :)
– David
Aug 28 '11 at 20:46
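Put together, the fix from this comment looks roughly like this (a sketch; the findAll('a') loop is an assumption about how the links are being processed):

soup = BeautifulSoup(content, fromEncoding='latin-1')
for item in soup.findAll('a'):
    # Undo the mislabelled decode: re-encode the latin-1 text, then
    # decode the resulting bytes as the UTF-8 they really are
    i_title = item.contents[0].encode('latin-1').decode('utf-8')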





The code seems to be wrong (double BeautifulSoup?): AttributeError: type object 'BeautifulSoup' has no attribute 'BeautifulSoup' - maybe the interface changed?
– S.B.
Mar 30 '16 at 12:49






In your case, this page has wrong UTF-8 data, which confuses BeautifulSoup and makes it think your page uses windows-1252. You can do this trick:


import BeautifulSoup

soup = BeautifulSoup.BeautifulSoup(content.decode('utf-8', 'ignore'))



By doing this you discard any invalid byte sequences from the page source, and BeautifulSoup will guess the encoding correctly.



You can replace 'ignore' with 'replace' and check the text for U+FFFD replacement characters to see what has been discarded.
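For example, a quick way to see how much was thrown away (a sketch):

# Decode with 'replace' and count the U+FFFD replacement characters
cleaned = content.decode('utf-8', 'replace')
print(cleaned.count(u'\ufffd'))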



Actually, it's a very hard task to write a crawler that can guess a page's encoding correctly every time (browsers are very good at this nowadays). You can use modules like 'chardet', but in your case, for example, it will guess the encoding as ISO-8859-2, which isn't correct either.
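For the record, this is all chardet takes (the ISO-8859-2 guess for this page is what the paragraph above reports):

import chardet

guess = chardet.detect(content)
print(guess)  # e.g. {'encoding': 'ISO-8859-2', 'confidence': ...}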



If you really need to be able to get the encoding of any page a user could possibly supply, you should either build a multi-level detection function (try utf-8, try latin-1, and so on, like we did in our project) or use some of the detection code from Firefox or Chromium as a C module.
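A minimal sketch of such a multi-level function (names are illustrative, not the actual code from that project):

def detect_and_decode(raw, candidates=('utf-8', 'latin-1')):
    # Try each candidate in turn; strict decoding raises on a mismatch
    for encoding in candidates:
        try:
            return raw.decode(encoding)
        except UnicodeDecodeError:
            continue
    # latin-1 maps every possible byte, so this fallback only runs
    # if the candidate list itself doesn't include it
    return raw.decode('latin-1', 'replace')

Since latin-1 accepts any byte sequence, putting it last guarantees the function always returns something.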



The first answer is right; these functions are sometimes effective.


def __if_number_get_string(number):
    converted_str = number
    if isinstance(number, int) or isinstance(number, float):
        converted_str = str(number)
    return converted_str


def get_unicode(strOrUnicode, encoding='utf-8'):
    strOrUnicode = __if_number_get_string(strOrUnicode)
    if isinstance(strOrUnicode, unicode):
        return strOrUnicode
    return unicode(strOrUnicode, encoding, errors='ignore')


def get_string(strOrUnicode, encoding='utf-8'):
    strOrUnicode = __if_number_get_string(strOrUnicode)
    if isinstance(strOrUnicode, unicode):
        return strOrUnicode.encode(encoding)
    return strOrUnicode
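Presumably the intended usage is along these lines (note these helpers are Python 2 only, since the unicode type no longer exists in Python 3):

# Normalise the raw content to unicode before handing it to BeautifulSoup
soup = BeautifulSoup(get_unicode(content))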



I'd suggest taking a more methodical, foolproof approach.


import urllib
import chardet
from BeautifulSoup import BeautifulSoup

# 1. get the raw data
raw = urllib.urlopen('http://www.elnorte.ec/').read()

# 2. detect the encoding and convert to unicode
content = toUnicode(raw)  # see my caricature for toUnicode below

# 3. pass unicode to beautiful soup.
soup = BeautifulSoup(content)


def toUnicode(s):
    if type(s) is unicode:
        return s
    elif type(s) is str:
        d = chardet.detect(s)
        (cs, conf) = (d['encoding'], d['confidence'])
        if conf > 0.80:
            try:
                return s.decode(cs, errors='replace')
            except Exception as ex:
                pass
    # force and return only the ascii subset
    return unicode(''.join([i if ord(i) < 128 else ' ' for i in s]))



No matter what you throw at this, it will always send valid unicode to BeautifulSoup.



As a result, your parsed tree will behave much better, and won't fail in new and more interesting ways every time you have new data.



Trial and error doesn't work in code - there are just too many combinations :-)



You can try this, which works for every encoding:


import requests
from bs4 import BeautifulSoup
from bs4.dammit import EncodingDetector

headers = {"User-Agent": USERAGENT}  # USERAGENT must be defined elsewhere
resp = requests.get(url, headers=headers)
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
encoding = html_encoding or http_encoding
soup = BeautifulSoup(resp.content, 'lxml', from_encoding=encoding)





Nice answer, but I would drop the headers (they're not really needed, and since you didn't define USERAGENT the code can't be blindly copy-pasted).
– Derlin
Jun 29 at 9:19
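Stripped down as the comment suggests, a copy-pasteable sketch (assuming url is defined) might be:

import requests
from bs4 import BeautifulSoup
from bs4.dammit import EncodingDetector

resp = requests.get(url)
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
soup = BeautifulSoup(resp.content, 'lxml', from_encoding=html_encoding or http_encoding)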







